Sat May 27 13:09:38 UTC 2023 I: starting to build bison/bookworm/armhf on jenkins on '2023-05-27 13:09'
Sat May 27 13:09:38 UTC 2023 I: The jenkins build log is/was available at https://jenkins.debian.net/userContent/reproducible/debian/build_service/armhf_24/780/console.log
Sat May 27 13:09:38 UTC 2023 I: Downloading source for bookworm/bison=2:3.8.2+dfsg-1
--2023-05-27 13:09:38-- http://cdn-fastly.deb.debian.org/debian/pool/main/b/bison/bison_3.8.2%2bdfsg-1.dsc
Connecting to 78.137.99.97:3128... connected.
Proxy request sent, awaiting response... 200 OK
Length: 1855 (1.8K) [text/prs.lines.tag]
Saving to: ‘bison_3.8.2+dfsg-1.dsc’

0K . 100% 134M=0s

2023-05-27 13:09:38 (134 MB/s) - ‘bison_3.8.2+dfsg-1.dsc’ saved [1855/1855]

Sat May 27 13:09:38 UTC 2023 I: bison_3.8.2+dfsg-1.dsc
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA512

Format: 3.0 (quilt)
Source: bison
Binary: bison, libbison-dev
Architecture: any
Version: 2:3.8.2+dfsg-1
Maintainer: Chuan-kai Lin
Homepage: https://www.gnu.org/software/bison/
Standards-Version: 4.6.0
Build-Depends: debhelper-compat (= 12), gettext, flex, m4 (>= 1.4-14), help2man
Package-List:
 bison deb devel optional arch=any
 libbison-dev deb libdevel optional arch=any
Checksums-Sha1:
 138e267dd2e16ea4c6eedf0ad57d4e174764b3df 2654920 bison_3.8.2+dfsg.orig.tar.xz
 1876ee15e10da593c27731e4e9098986911ff4d4 11164 bison_3.8.2+dfsg-1.debian.tar.xz
Checksums-Sha256:
 dff8a3c96dd34121828f62a7fa49e1f7765815b89e59f564e8d2a9e71c177be5 2654920 bison_3.8.2+dfsg.orig.tar.xz
 e878473ddbc8601c1b9b8558fab86950ae6ae5b7c146802d587e284d5e1c9abb 11164 bison_3.8.2+dfsg-1.debian.tar.xz
Files:
 a502bd24cdca3c17eb648b5a9132613d 2654920 bison_3.8.2+dfsg.orig.tar.xz
 e152dd8a66a18274d5099bb780ac75f8 11164 bison_3.8.2+dfsg-1.debian.tar.xz

-----BEGIN PGP SIGNATURE-----

iQJFBAEBCgAvFiEEpjo/UW6i/KKi+2ONAbOplSquRxMFAmFY+B4RHGNrbGluQGRl
Ymlhbi5vcmcACgkQAbOplSquRxPyehAAxsCWKLy+Mr9nUkv0KWUEcLsS882kllvf
/gTIkIbXDAptcoIkV9dWZHaMjN1Bw6w14eUvjifNahKDjwRU2hUbtyiwf6umyUbZ
vrw6HOfOdiBqaZUyCkZGJAjp3kUP14ZnLe4s9JE5q1EG5LQqboKNkUaF9fq8OTOF
o+z0duhskgni7zgxTcJC4+GwrR5uyXf2J4F96PVBZcmUzCx2jMGlWhVEJErRCtZH
OJa2FM4enNbSBkp+uY659ynegru+sKUfmFOP8Y5vCLhPVtIZk7ZkYwtt8Sh4hc8Y
aUBBZSmk4KkEv8SI2C7JB89lvElShP2HuZjVki29TlJhcvftKkyY/1YUBO7j68oO
Ak5XXK4WGBSlA5QiJC1xu3ttptzRltFXOmkYAV3sUtGlwYg/UcaooJKSFdtftK9w
udJiFCNKU0Ag+EDl+rrlJ6SuwWm0vCEb50dnxmp92QPoZGC84eVZFHioOgRk40K8
8YIaqVN1/5sVqueVpNuOAaN90nav4rDeQdVgwymD+dIlVyc2DAHU2/gcEDawNMaG
uTEjEsBfGTMlqEfVgkY7ySQv6+J+LuPvaYXBKVWdyXHZV7IF54N2IQPTyqGRiK/p
dwvH34vAUzPk/Qn9WIj68p49Kfc6d6iO8MphWnw6kBSOc/m3B1Q6aQo/0qHUHVA5
04WcX8EsdJI=
=75vu
-----END PGP SIGNATURE-----
Sat May 27 13:09:38 UTC 2023 I: Checking whether the package is not for us
Sat May 27 13:09:38 UTC 2023 I: Starting 1st build on remote node ff4a-armhf-rb.debian.net.
Sat May 27 13:09:38 UTC 2023 I: Preparing to do remote build '1' on ff4a-armhf-rb.debian.net.
Sat May 27 17:16:05 UTC 2023 I: Deleting $TMPDIR on ff4a-armhf-rb.debian.net.
I: pbuilder: network access will be disabled during build
I: Current time: Sat May 27 01:09:46 -12 2023
I: pbuilder-time-stamp: 1685192986
I: Building the build Environment
I: extracting base tarball [/var/cache/pbuilder/bookworm-reproducible-base.tgz]
I: copying local configuration
W: --override-config is not set; not updating apt.conf Read the manpage for details.
I: mounting /proc filesystem
I: mounting /sys filesystem
I: creating /{dev,run}/shm
I: mounting /dev/pts filesystem
I: redirecting /dev/ptmx to /dev/pts/ptmx
I: policy-rc.d already exists
I: Copying source file
I: copying [bison_3.8.2+dfsg-1.dsc]
I: copying [./bison_3.8.2+dfsg.orig.tar.xz]
I: copying [./bison_3.8.2+dfsg-1.debian.tar.xz]
I: Extracting source
gpgv: Signature made Sat Oct 2 12:23:58 2021 -12
gpgv: using RSA key A63A3F516EA2FCA2A2FB638D01B3A9952AAE4713
gpgv: issuer "cklin@debian.org"
gpgv: Can't check signature: No public key
dpkg-source: warning: cannot verify inline signature for ./bison_3.8.2+dfsg-1.dsc: no acceptable signature found
dpkg-source: info: extracting bison in bison-3.8.2+dfsg
dpkg-source: info: unpacking bison_3.8.2+dfsg.orig.tar.xz
dpkg-source: info: unpacking bison_3.8.2+dfsg-1.debian.tar.xz
dpkg-source: info: using patch list from debian/patches/series
dpkg-source: info: applying 01_keep_stamp_files
dpkg-source: info: applying 02_parse_h_dependency
I: using fakeroot in build.
I: Installing the build-deps
I: user script /srv/workspace/pbuilder/23384/tmp/hooks/D02_print_environment starting
I: set
BUILDDIR='/build'
BUILDUSERGECOS='first user,first room,first work-phone,first home-phone,first other'
BUILDUSERNAME='pbuilder1'
BUILD_ARCH='armhf'
DEBIAN_FRONTEND='noninteractive'
DEB_BUILD_OPTIONS='buildinfo=+all reproducible=+all parallel=3 '
DISTRIBUTION='bookworm'
HOME='/root'
HOST_ARCH='armhf'
IFS=' '
INVOCATION_ID='1948903121d247e0a3388805fd29222b'
LANG='C'
LANGUAGE='en_US:en'
LC_ALL='C'
MAIL='/var/mail/root'
OPTIND='1'
PATH='/usr/sbin:/usr/bin:/sbin:/bin:/usr/games'
PBCURRENTCOMMANDLINEOPERATION='build'
PBUILDER_OPERATION='build'
PBUILDER_PKGDATADIR='/usr/share/pbuilder'
PBUILDER_PKGLIBDIR='/usr/lib/pbuilder'
PBUILDER_SYSCONFDIR='/etc'
PPID='23384'
PS1='# '
PS2='> '
PS4='+ '
PWD='/'
SHELL='/bin/bash'
SHLVL='2'
SUDO_COMMAND='/usr/bin/timeout -k 18.1h 18h /usr/bin/ionice -c 3 /usr/bin/nice /usr/sbin/pbuilder --build --configfile /srv/reproducible-results/rbuild-debian/r-b-build.7ctXGpJs/pbuilderrc_Kdql --distribution bookworm --hookdir /etc/pbuilder/first-build-hooks --debbuildopts -b --basetgz /var/cache/pbuilder/bookworm-reproducible-base.tgz --buildresult /srv/reproducible-results/rbuild-debian/r-b-build.7ctXGpJs/b1 --logfile b1/build.log bison_3.8.2+dfsg-1.dsc'
SUDO_GID='113'
SUDO_UID='107'
SUDO_USER='jenkins'
TERM='unknown'
TZ='/usr/share/zoneinfo/Etc/GMT+12'
USER='root'
_='/usr/bin/systemd-run'
http_proxy='http://10.0.0.15:3142/'
I: uname -a
Linux ff4a 5.10.0-23-armmp-lpae #1 SMP Debian 5.10.179-1 (2023-05-12) armv7l GNU/Linux
I: ls -l /bin
total 5072
-rwxr-xr-x 1 root root 838488 Apr 23 09:24 bash
-rwxr-xr-x 3 root root 67144 Sep 18 2022 bunzip2
-rwxr-xr-x 3 root root 67144 Sep 18 2022 bzcat
lrwxrwxrwx 1 root root 6 Sep 18 2022 bzcmp -> bzdiff
-rwxr-xr-x 1 root root 2225 Sep 18 2022 bzdiff
lrwxrwxrwx 1 root root 6 Sep 18 2022 bzegrep -> bzgrep
-rwxr-xr-x 1 root root 4893 Nov 27 2021 bzexe
lrwxrwxrwx 1 root root 6 Sep 18 2022 bzfgrep -> bzgrep
-rwxr-xr-x 1 root root 3775 Sep 18 2022 bzgrep
-rwxr-xr-x 3 root root 67144 Sep 18 2022 bzip2
-rwxr-xr-x 1 root root 67112 Sep 18 2022 bzip2recover
lrwxrwxrwx 1 root root 6 Sep 18 2022 bzless -> bzmore
-rwxr-xr-x 1 root root 1297 Sep 18 2022 bzmore
-rwxr-xr-x 1 root root 67632 Sep 20 2022 cat
-rwxr-xr-x 1 root root 67676 Sep 20 2022 chgrp
-rwxr-xr-x 1 root root 67644 Sep 20 2022 chmod
-rwxr-xr-x 1 root root 67684 Sep 20 2022 chown
-rwxr-xr-x 1 root root 133532 Sep 20 2022 cp
-rwxr-xr-x 1 root root 132868 Jan 5 01:20 dash
-rwxr-xr-x 1 root root 133220 Sep 20 2022 date
-rwxr-xr-x 1 root root 67732 Sep 20 2022 dd
-rwxr-xr-x 1 root root 68104 Sep 20 2022 df
-rwxr-xr-x 1 root root 133632 Sep 20 2022 dir
-rwxr-xr-x 1 root root 59128 Mar 22 21:02 dmesg
lrwxrwxrwx 1 root root 8 Dec 19 01:33 dnsdomainname -> hostname
lrwxrwxrwx 1 root root 8 Dec 19 01:33 domainname -> hostname
-rwxr-xr-x 1 root root 67560 Sep 20 2022 echo
-rwxr-xr-x 1 root root 41 Jan 24 02:43 egrep
-rwxr-xr-x 1 root root 67548 Sep 20 2022 false
-rwxr-xr-x 1 root root 41 Jan 24 02:43 fgrep
-rwxr-xr-x 1 root root 55748 Mar 22 21:02 findmnt
-rwsr-xr-x 1 root root 26208 Mar 22 20:15 fusermount
-rwxr-xr-x 1 root root 128608 Jan 24 02:43 grep
-rwxr-xr-x 2 root root 2346 Apr 9 2022 gunzip
-rwxr-xr-x 1 root root 6447 Apr 9 2022 gzexe
-rwxr-xr-x 1 root root 64220 Apr 9 2022 gzip
-rwxr-xr-x 1 root root 67032 Dec 19 01:33 hostname
-rwxr-xr-x 1 root root 67720 Sep 20 2022 ln
-rwxr-xr-x 1 root root 35132 Mar 22 21:51 login
-rwxr-xr-x 1 root root 133632 Sep 20 2022 ls
-rwxr-xr-x 1 root root 136808 Mar 22 21:02 lsblk
-rwxr-xr-x 1 root root 67800 Sep 20 2022 mkdir
-rwxr-xr-x 1 root root 67764 Sep 20 2022 mknod
-rwxr-xr-x 1 root root 67596 Sep 20 2022 mktemp
-rwxr-xr-x 1 root root 38504 Mar 22 21:02 more
-rwsr-xr-x 1 root root 38496 Mar 22 21:02 mount
-rwxr-xr-x 1 root root 9824 Mar 22 21:02 mountpoint
-rwxr-xr-x 1 root root 133532 Sep 20 2022 mv
lrwxrwxrwx 1 root root 8 Dec 19 01:33 nisdomainname -> hostname
lrwxrwxrwx 1 root root 14 Apr 2 18:25 pidof -> /sbin/killall5
-rwxr-xr-x 1 root root 67608 Sep 20 2022 pwd
lrwxrwxrwx 1 root root 4 Apr 23 09:24 rbash -> bash
-rwxr-xr-x 1 root root 67600 Sep 20 2022 readlink
-rwxr-xr-x 1 root root 67672 Sep 20 2022 rm
-rwxr-xr-x 1 root root 67600 Sep 20 2022 rmdir
-rwxr-xr-x 1 root root 67400 Nov 2 2022 run-parts
-rwxr-xr-x 1 root root 133372 Jan 5 07:55 sed
lrwxrwxrwx 1 root root 4 Jan 5 01:20 sh -> dash
-rwxr-xr-x 1 root root 67584 Sep 20 2022 sleep
-rwxr-xr-x 1 root root 67644 Sep 20 2022 stty
-rwsr-xr-x 1 root root 50800 Mar 22 21:02 su
-rwxr-xr-x 1 root root 67584 Sep 20 2022 sync
-rwxr-xr-x 1 root root 336764 Apr 6 02:25 tar
-rwxr-xr-x 1 root root 67144 Nov 2 2022 tempfile
-rwxr-xr-x 1 root root 133224 Sep 20 2022 touch
-rwxr-xr-x 1 root root 67548 Sep 20 2022 true
-rwxr-xr-x 1 root root 9768 Mar 22 20:15 ulockmgr_server
-rwsr-xr-x 1 root root 22108 Mar 22 21:02 umount
-rwxr-xr-x 1 root root 67572 Sep 20 2022 uname
-rwxr-xr-x 2 root root 2346 Apr 9 2022 uncompress
-rwxr-xr-x 1 root root 133632 Sep 20 2022 vdir
-rwxr-xr-x 1 root root 42608 Mar 22 21:02 wdctl
lrwxrwxrwx 1 root root 8 Dec 19 01:33 ypdomainname -> hostname
-rwxr-xr-x 1 root root 1984 Apr 9 2022 zcat
-rwxr-xr-x 1 root root 1678 Apr 9 2022 zcmp
-rwxr-xr-x 1 root root 6460 Apr 9 2022 zdiff
-rwxr-xr-x 1 root root 29 Apr 9 2022 zegrep
-rwxr-xr-x 1 root root 29 Apr 9 2022 zfgrep
-rwxr-xr-x 1 root root 2081 Apr 9 2022 zforce
-rwxr-xr-x 1 root root 8103 Apr 9 2022 zgrep
-rwxr-xr-x 1 root root 2206 Apr 9 2022 zless
-rwxr-xr-x 1 root root 1842 Apr 9 2022 zmore
-rwxr-xr-x 1 root root 4577 Apr 9 2022 znew
I: user script /srv/workspace/pbuilder/23384/tmp/hooks/D02_print_environment finished
 -> Attempting to satisfy build-dependencies
 -> Creating pbuilder-satisfydepends-dummy package
Package: pbuilder-satisfydepends-dummy
Version: 0.invalid.0
Architecture: armhf
Maintainer: Debian Pbuilder Team
Description: Dummy package to satisfy dependencies with aptitude - created by pbuilder
 This package was created automatically by pbuilder to satisfy the build-dependencies of the package being currently built.
Depends: debhelper-compat (= 12), gettext, flex, m4 (>= 1.4-14), help2man
dpkg-deb: building package 'pbuilder-satisfydepends-dummy' in '/tmp/satisfydepends-aptitude/pbuilder-satisfydepends-dummy.deb'.
Selecting previously unselected package pbuilder-satisfydepends-dummy.
(Reading database ... 19326 files and directories currently installed.)
Preparing to unpack .../pbuilder-satisfydepends-dummy.deb ...
Unpacking pbuilder-satisfydepends-dummy (0.invalid.0) ...
dpkg: pbuilder-satisfydepends-dummy: dependency problems, but configuring anyway as you requested:
 pbuilder-satisfydepends-dummy depends on debhelper-compat (= 12); however:
  Package debhelper-compat is not installed.
 pbuilder-satisfydepends-dummy depends on gettext; however:
  Package gettext is not installed.
 pbuilder-satisfydepends-dummy depends on flex; however:
  Package flex is not installed.
 pbuilder-satisfydepends-dummy depends on m4 (>= 1.4-14); however:
  Package m4 is not installed.
 pbuilder-satisfydepends-dummy depends on help2man; however:
  Package help2man is not installed.
Setting up pbuilder-satisfydepends-dummy (0.invalid.0) ...
Reading package lists...
Building dependency tree...
Reading state information...
Initializing package states...
Writing extended state information...
Building tag database...
pbuilder-satisfydepends-dummy is already installed at the requested version (0.invalid.0)
pbuilder-satisfydepends-dummy is already installed at the requested version (0.invalid.0)
The following NEW packages will be installed:
  autoconf{a} automake{a} autopoint{a} autotools-dev{a} bsdextrautils{a}
  debhelper{a} dh-autoreconf{a} dh-strip-nondeterminism{a} dwz{a} file{a}
  flex{a} gettext{a} gettext-base{a} groff-base{a} help2man{a}
  intltool-debian{a} libarchive-zip-perl{a} libdebhelper-perl{a} libelf1{a}
  libfile-stripnondeterminism-perl{a} libicu72{a} liblocale-gettext-perl{a}
  libmagic-mgc{a} libmagic1{a} libpipeline1{a} libsub-override-perl{a}
  libtool{a} libuchardet0{a} libxml2{a} m4{a} man-db{a} po-debconf{a}
  sensible-utils{a}
The following packages are RECOMMENDED but will NOT be installed:
  curl libarchive-cpio-perl libfl-dev libltdl-dev libmail-sendmail-perl lynx
  wget
0 packages upgraded, 33 newly installed, 0 to remove and 0 not upgraded.
Need to get 18.7 MB of archives. After unpacking 69.6 MB will be used.
Writing extended state information...
Get: 1 http://deb.debian.org/debian bookworm/main armhf m4 armhf 1.4.19-3 [265 kB]
Get: 2 http://deb.debian.org/debian bookworm/main armhf flex armhf 2.6.4-8.1 [424 kB]
Get: 3 http://deb.debian.org/debian bookworm/main armhf liblocale-gettext-perl armhf 1.07-5 [14.9 kB]
Get: 4 http://deb.debian.org/debian bookworm/main armhf sensible-utils all 0.0.17+nmu1 [19.0 kB]
Get: 5 http://deb.debian.org/debian bookworm/main armhf libmagic-mgc armhf 1:5.44-3 [305 kB]
Get: 6 http://deb.debian.org/debian bookworm/main armhf libmagic1 armhf 1:5.44-3 [96.5 kB]
Get: 7 http://deb.debian.org/debian bookworm/main armhf file armhf 1:5.44-3 [41.6 kB]
Get: 8 http://deb.debian.org/debian bookworm/main armhf gettext-base armhf 0.21-12 [157 kB]
Get: 9 http://deb.debian.org/debian bookworm/main armhf libuchardet0 armhf 0.0.7-1 [65.0 kB]
Get: 10 http://deb.debian.org/debian bookworm/main armhf groff-base armhf 1.22.4-10 [825 kB]
Get: 11 http://deb.debian.org/debian bookworm/main armhf bsdextrautils armhf 2.38.1-5+b1 [78.6 kB]
Get: 12 http://deb.debian.org/debian bookworm/main armhf libpipeline1 armhf 1.5.7-1 [33.6 kB]
Get: 13 http://deb.debian.org/debian bookworm/main armhf man-db armhf 2.11.2-2 [1351 kB]
Get: 14 http://deb.debian.org/debian bookworm/main armhf autoconf all 2.71-3 [332 kB]
Get: 15 http://deb.debian.org/debian bookworm/main armhf autotools-dev all 20220109.1 [51.6 kB]
Get: 16 http://deb.debian.org/debian bookworm/main armhf automake all 1:1.16.5-1.3 [823 kB]
Get: 17 http://deb.debian.org/debian bookworm/main armhf autopoint all 0.21-12 [495 kB]
Get: 18 http://deb.debian.org/debian bookworm/main armhf libdebhelper-perl all 13.11.4 [81.2 kB]
Get: 19 http://deb.debian.org/debian bookworm/main armhf libtool all 2.4.7-5 [517 kB]
Get: 20 http://deb.debian.org/debian bookworm/main armhf dh-autoreconf all 20 [17.1 kB]
Get: 21 http://deb.debian.org/debian bookworm/main armhf libarchive-zip-perl all 1.68-1 [104 kB]
Get: 22 http://deb.debian.org/debian bookworm/main armhf libsub-override-perl all 0.09-4 [9304 B]
Get: 23 http://deb.debian.org/debian bookworm/main armhf libfile-stripnondeterminism-perl all 1.13.1-1 [19.4 kB]
Get: 24 http://deb.debian.org/debian bookworm/main armhf dh-strip-nondeterminism all 1.13.1-1 [8620 B]
Get: 25 http://deb.debian.org/debian bookworm/main armhf libelf1 armhf 0.188-2.1 [170 kB]
Get: 26 http://deb.debian.org/debian bookworm/main armhf dwz armhf 0.15-1 [101 kB]
Get: 27 http://deb.debian.org/debian bookworm/main armhf libicu72 armhf 72.1-3 [9048 kB]
Get: 28 http://deb.debian.org/debian bookworm/main armhf libxml2 armhf 2.9.14+dfsg-1.2 [591 kB]
Get: 29 http://deb.debian.org/debian bookworm/main armhf gettext armhf 0.21-12 [1229 kB]
Get: 30 http://deb.debian.org/debian bookworm/main armhf intltool-debian all 0.35.0+20060710.6 [22.9 kB]
Get: 31 http://deb.debian.org/debian bookworm/main armhf po-debconf all 1.0.21+nmu1 [248 kB]
Get: 32 http://deb.debian.org/debian bookworm/main armhf debhelper all 13.11.4 [942 kB]
Get: 33 http://deb.debian.org/debian bookworm/main armhf help2man armhf 1.49.3 [198 kB]
Fetched 18.7 MB in 3s (6797 kB/s)
debconf: delaying package configuration, since apt-utils is not installed
Selecting previously unselected package m4.
(Reading database ...
(Reading database ... 5%
(Reading database ... 10%
(Reading database ... 15%
(Reading database ... 20%
(Reading database ... 25%
(Reading database ... 30%
(Reading database ... 35%
(Reading database ... 40%
(Reading database ... 45%
(Reading database ... 50%
(Reading database ... 55%
(Reading database ... 60%
(Reading database ... 65%
(Reading database ... 70%
(Reading database ... 75%
(Reading database ... 80%
(Reading database ... 85%
(Reading database ... 90%
(Reading database ... 95%
(Reading database ... 100%
(Reading database ... 19326 files and directories currently installed.)
Preparing to unpack .../00-m4_1.4.19-3_armhf.deb ...
Unpacking m4 (1.4.19-3) ...
Selecting previously unselected package flex.
Preparing to unpack .../01-flex_2.6.4-8.1_armhf.deb ...
Unpacking flex (2.6.4-8.1) ...
Selecting previously unselected package liblocale-gettext-perl.
Preparing to unpack .../02-liblocale-gettext-perl_1.07-5_armhf.deb ...
Unpacking liblocale-gettext-perl (1.07-5) ...
Selecting previously unselected package sensible-utils.
Preparing to unpack .../03-sensible-utils_0.0.17+nmu1_all.deb ...
Unpacking sensible-utils (0.0.17+nmu1) ...
Selecting previously unselected package libmagic-mgc.
Preparing to unpack .../04-libmagic-mgc_1%3a5.44-3_armhf.deb ...
Unpacking libmagic-mgc (1:5.44-3) ...
Selecting previously unselected package libmagic1:armhf.
Preparing to unpack .../05-libmagic1_1%3a5.44-3_armhf.deb ...
Unpacking libmagic1:armhf (1:5.44-3) ...
Selecting previously unselected package file.
Preparing to unpack .../06-file_1%3a5.44-3_armhf.deb ...
Unpacking file (1:5.44-3) ...
Selecting previously unselected package gettext-base.
Preparing to unpack .../07-gettext-base_0.21-12_armhf.deb ...
Unpacking gettext-base (0.21-12) ...
Selecting previously unselected package libuchardet0:armhf.
Preparing to unpack .../08-libuchardet0_0.0.7-1_armhf.deb ...
Unpacking libuchardet0:armhf (0.0.7-1) ...
Selecting previously unselected package groff-base.
Preparing to unpack .../09-groff-base_1.22.4-10_armhf.deb ...
Unpacking groff-base (1.22.4-10) ...
Selecting previously unselected package bsdextrautils.
Preparing to unpack .../10-bsdextrautils_2.38.1-5+b1_armhf.deb ...
Unpacking bsdextrautils (2.38.1-5+b1) ...
Selecting previously unselected package libpipeline1:armhf.
Preparing to unpack .../11-libpipeline1_1.5.7-1_armhf.deb ...
Unpacking libpipeline1:armhf (1.5.7-1) ...
Selecting previously unselected package man-db.
Preparing to unpack .../12-man-db_2.11.2-2_armhf.deb ...
Unpacking man-db (2.11.2-2) ...
Selecting previously unselected package autoconf.
Preparing to unpack .../13-autoconf_2.71-3_all.deb ...
Unpacking autoconf (2.71-3) ...
Selecting previously unselected package autotools-dev.
Preparing to unpack .../14-autotools-dev_20220109.1_all.deb ...
Unpacking autotools-dev (20220109.1) ...
Selecting previously unselected package automake.
Preparing to unpack .../15-automake_1%3a1.16.5-1.3_all.deb ...
Unpacking automake (1:1.16.5-1.3) ...
Selecting previously unselected package autopoint.
Preparing to unpack .../16-autopoint_0.21-12_all.deb ...
Unpacking autopoint (0.21-12) ...
Selecting previously unselected package libdebhelper-perl.
Preparing to unpack .../17-libdebhelper-perl_13.11.4_all.deb ...
Unpacking libdebhelper-perl (13.11.4) ...
Selecting previously unselected package libtool.
Preparing to unpack .../18-libtool_2.4.7-5_all.deb ...
Unpacking libtool (2.4.7-5) ...
Selecting previously unselected package dh-autoreconf.
Preparing to unpack .../19-dh-autoreconf_20_all.deb ...
Unpacking dh-autoreconf (20) ...
Selecting previously unselected package libarchive-zip-perl.
Preparing to unpack .../20-libarchive-zip-perl_1.68-1_all.deb ...
Unpacking libarchive-zip-perl (1.68-1) ...
Selecting previously unselected package libsub-override-perl.
Preparing to unpack .../21-libsub-override-perl_0.09-4_all.deb ...
Unpacking libsub-override-perl (0.09-4) ...
Selecting previously unselected package libfile-stripnondeterminism-perl.
Preparing to unpack .../22-libfile-stripnondeterminism-perl_1.13.1-1_all.deb ...
Unpacking libfile-stripnondeterminism-perl (1.13.1-1) ...
Selecting previously unselected package dh-strip-nondeterminism.
Preparing to unpack .../23-dh-strip-nondeterminism_1.13.1-1_all.deb ...
Unpacking dh-strip-nondeterminism (1.13.1-1) ...
Selecting previously unselected package libelf1:armhf.
Preparing to unpack .../24-libelf1_0.188-2.1_armhf.deb ...
Unpacking libelf1:armhf (0.188-2.1) ...
Selecting previously unselected package dwz.
Preparing to unpack .../25-dwz_0.15-1_armhf.deb ...
Unpacking dwz (0.15-1) ...
Selecting previously unselected package libicu72:armhf.
Preparing to unpack .../26-libicu72_72.1-3_armhf.deb ...
Unpacking libicu72:armhf (72.1-3) ...
Selecting previously unselected package libxml2:armhf.
Preparing to unpack .../27-libxml2_2.9.14+dfsg-1.2_armhf.deb ...
Unpacking libxml2:armhf (2.9.14+dfsg-1.2) ...
Selecting previously unselected package gettext.
Preparing to unpack .../28-gettext_0.21-12_armhf.deb ...
Unpacking gettext (0.21-12) ...
Selecting previously unselected package intltool-debian.
Preparing to unpack .../29-intltool-debian_0.35.0+20060710.6_all.deb ...
Unpacking intltool-debian (0.35.0+20060710.6) ...
Selecting previously unselected package po-debconf.
Preparing to unpack .../30-po-debconf_1.0.21+nmu1_all.deb ...
Unpacking po-debconf (1.0.21+nmu1) ...
Selecting previously unselected package debhelper.
Preparing to unpack .../31-debhelper_13.11.4_all.deb ...
Unpacking debhelper (13.11.4) ...
Selecting previously unselected package help2man.
Preparing to unpack .../32-help2man_1.49.3_armhf.deb ...
Unpacking help2man (1.49.3) ...
Setting up libpipeline1:armhf (1.5.7-1) ...
Setting up libicu72:armhf (72.1-3) ...
Setting up bsdextrautils (2.38.1-5+b1) ...
Setting up libmagic-mgc (1:5.44-3) ...
Setting up libarchive-zip-perl (1.68-1) ...
Setting up libdebhelper-perl (13.11.4) ...
Setting up libmagic1:armhf (1:5.44-3) ...
Setting up gettext-base (0.21-12) ...
Setting up m4 (1.4.19-3) ...
Setting up file (1:5.44-3) ...
Setting up autotools-dev (20220109.1) ...
Setting up autopoint (0.21-12) ...
Setting up autoconf (2.71-3) ...
Setting up sensible-utils (0.0.17+nmu1) ...
Setting up libuchardet0:armhf (0.0.7-1) ...
Setting up libsub-override-perl (0.09-4) ...
Setting up libelf1:armhf (0.188-2.1) ...
Setting up libxml2:armhf (2.9.14+dfsg-1.2) ...
Setting up liblocale-gettext-perl (1.07-5) ...
Setting up automake (1:1.16.5-1.3) ...
update-alternatives: using /usr/bin/automake-1.16 to provide /usr/bin/automake (automake) in auto mode
Setting up libfile-stripnondeterminism-perl (1.13.1-1) ...
Setting up flex (2.6.4-8.1) ...
Setting up gettext (0.21-12) ...
Setting up libtool (2.4.7-5) ...
Setting up intltool-debian (0.35.0+20060710.6) ...
Setting up help2man (1.49.3) ...
Setting up dh-autoreconf (20) ...
Setting up dh-strip-nondeterminism (1.13.1-1) ...
Setting up dwz (0.15-1) ...
Setting up groff-base (1.22.4-10) ...
Setting up po-debconf (1.0.21+nmu1) ...
Setting up man-db (2.11.2-2) ...
Not building database; man-db/auto-update is not 'true'.
Setting up debhelper (13.11.4) ...
Processing triggers for libc-bin (2.36-9) ...
Reading package lists...
Building dependency tree...
Reading state information...
Reading extended state information...
Initializing package states...
Writing extended state information...
Building tag database...
 -> Finished parsing the build-deps
Reading package lists...
Building dependency tree...
Reading state information...
fakeroot is already the newest version (1.31-1.2).
0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded.
I: Building the package
I: Running cd /build/bison-3.8.2+dfsg/ && env PATH="/usr/sbin:/usr/bin:/sbin:/bin:/usr/games" HOME="/nonexistent/first-build" dpkg-buildpackage -us -uc -b && env PATH="/usr/sbin:/usr/bin:/sbin:/bin:/usr/games" HOME="/nonexistent/first-build" dpkg-genchanges -S > ../bison_3.8.2+dfsg-1_source.changes
dpkg-buildpackage: info: source package bison
dpkg-buildpackage: info: source version 2:3.8.2+dfsg-1
dpkg-buildpackage: info: source distribution unstable
dpkg-buildpackage: info: source changed by Chuan-kai Lin
dpkg-source --before-build .
dpkg-buildpackage: info: host architecture armhf
fakeroot debian/rules clean
dh clean
debian/rules override_dh_auto_clean
make[1]: Entering directory '/build/bison-3.8.2+dfsg'
[ ! -f /build/bison-3.8.2+dfsg/Makefile ] || dh_auto_clean
make[1]: Leaving directory '/build/bison-3.8.2+dfsg'
dh_clean
debian/rules build
dh build
dh_update_autotools_config
debian/rules execute_before_dh_autoreconf
make[1]: Entering directory '/build/bison-3.8.2+dfsg'
echo '@setfilename bison.info' > /build/bison-3.8.2+dfsg/doc/bison.texi
make[1]: Leaving directory '/build/bison-3.8.2+dfsg'
dh_autoreconf
Copying file build-aux/config.rpath
Copying file m4/extern-inline.m4
Copying file m4/glibc2.m4
Copying file m4/glibc21.m4
Copying file m4/intdiv0.m4
Copying file m4/intl.m4
Copying file m4/intldir.m4
Copying file m4/intmax.m4
Copying file m4/inttypes-pri.m4
Copying file m4/lcmessage.m4
Copying file m4/longlong.m4
Copying file m4/printf-posix.m4
Copying file m4/uintmax_t.m4
Copying file gnulib-po/Makevars.template
Copying file runtime-po/Makevars.template
Copying file po/Makevars.template
Copying file gnulib-po/remove-potcdate.sin
debian/rules override_dh_auto_configure
make[1]: Entering directory '/build/bison-3.8.2+dfsg'
dh_auto_configure -- --disable-silent-rules
./configure --build=arm-linux-gnueabihf --prefix=/usr --includedir=\${prefix}/include --mandir=\${prefix}/share/man --infodir=\${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --disable-option-checking --disable-silent-rules --libdir=\${prefix}/lib/arm-linux-gnueabihf --runstatedir=/run --disable-maintainer-mode --disable-dependency-tracking --disable-silent-rules
checking for a BSD-compatible install... /usr/bin/install -c
checking whether build environment is sane... yes
checking for a race-free mkdir -p... /bin/mkdir -p
checking for gawk... no
checking for mawk... mawk
checking whether make sets $(MAKE)... yes
checking whether make supports nested variables... yes
checking whether make supports nested variables... (cached) yes
checking for gcc... gcc
checking whether the C compiler works... yes
checking for C compiler default output file name... a.out
checking for suffix of executables... 
checking whether we are cross compiling... no
checking for suffix of object files... o
checking whether the compiler supports GNU C... yes
checking whether gcc accepts -g... yes
checking for gcc option to enable C11 features... none needed
checking whether gcc understands -c and -o together... yes
checking whether the compiler is clang... no
checking for compiler option needed when checking for declarations... none
checking whether make supports the include directive... yes (GNU style)
checking dependency style of gcc... none
checking for g++... g++
checking whether the compiler supports GNU C++... yes
checking whether g++ accepts -g... yes
checking for g++ option to enable C++11 features... none needed
checking dependency style of g++... none
checking for stdio.h... yes
checking for stdlib.h... yes
checking for string.h... yes
checking for inttypes.h... yes
checking for stdint.h... yes
checking for strings.h... yes
checking for sys/stat.h... yes
checking for sys/types.h... yes
checking for unistd.h... yes
checking for wchar.h... yes
checking for minix/config.h... no
checking for locale.h... yes
checking for sys/param.h... yes
checking for sys/socket.h... yes
checking for stdio_ext.h... yes
checking for features.h... yes
checking for getopt.h... yes
checking for sys/cdefs.h... yes
checking for sys/time.h... yes
checking for iconv.h... yes
checking for limits.h... yes
checking for crtdefs.h... no
checking for wctype.h... yes
checking for threads.h... yes
checking for math.h... yes
checking for sys/mman.h... yes
checking for spawn.h... yes
checking for sys/ioctl.h... yes
checking for sys/resource.h... yes
checking for sys/times.h... yes
checking for sys/wait.h... yes
checking for termios.h... yes
checking for dirent.h... yes
checking for xlocale.h... no
checking whether it is safe to define __EXTENSIONS__... yes
checking whether _XOPEN_SOURCE should be defined... no
checking build system type... arm-unknown-linux-gnueabihf
checking host system type... arm-unknown-linux-gnueabihf
checking how to run the C preprocessor... gcc -E
checking for grep that handles long lines and -e... /bin/grep
checking for egrep... /bin/grep -E
checking for Minix Amsterdam compiler... no
checking for ar... ar
checking for ranlib... ranlib
checking for special C compiler options needed for large files... no
checking for _FILE_OFFSET_BITS value needed for large files... 64
checking for time_t past the year 2038... no
checking for 64-bit time_t with _TIME_BITS=64... yes
checking for ld used by gcc... /usr/bin/ld
checking if the linker (/usr/bin/ld) is GNU ld... yes
checking for shared library run path origin... done
checking 32-bit host C ABI... yes
checking for ELF binary format... yes
checking for the common suffixes of directories in the library search path... lib,lib,lib
checking for libtextstyle... no
checking for inline... inline
checking for tcdrain... yes
checking for canonicalize_file_name... yes
checking for faccessat... yes
checking for realpath... yes
checking for lstat... yes
checking for readlinkat... yes
checking for _set_invalid_parameter_handler... no
checking for fchdir... yes
checking for fcntl... yes
checking for symlink... yes
checking for ffsl... yes
checking for vasnprintf... no
checking for snprintf... yes
checking for fsync... yes
checking for microuptime... no
checking for nanouptime... no
checking for getprogname... no
checking for getexecname... no
checking for getrusage... yes
checking for gettimeofday... yes
checking for iswcntrl... yes
checking for iswblank... yes
checking for mbsinit... yes
checking for mbrtowc... yes
checking for isascii... yes
checking for mprotect... yes
checking for obstack_printf... yes
checking for strerror_r... yes
checking for __xpg_strerror_r... yes
checking for pipe... yes
checking for pipe2... yes
checking for posix_spawn_file_actions_addchdir_np... yes
checking for posix_spawn_file_actions_addchdir... no
checking for readlink... yes
checking for setenv... yes
checking for link... yes
checking for sigaction... yes
checking for sigaltstack... yes
checking for siginterrupt... yes
checking for stpncpy... yes
checking for strndup... yes
checking for wcwidth... yes
checking for fdopendir... yes
checking for mempcpy... yes
checking for __fseterr... no
checking for fstatat... yes
checking for getdelim... yes
checking for getdtablesize... yes
checking for openat... yes
checking for catgets... yes
checking for setlocale... yes
checking whether pragma GCC diagnostic push works... yes
checking whether C++ compiler handles -Werror -Wunknown-warning-option... no
checking whether C++ compiler handles -fno-exceptions... yes
checking whether C++ compiler accepts -std=c++98... yes
checking whether C++ compiler accepts -std=c++03... yes
checking whether C++ compiler accepts -std=c++11... yes
checking whether C++ compiler accepts -std=c++14... yes
checking whether C++ compiler accepts -std=c++17... yes
checking whether C++ compiler accepts -std=c++20... yes
checking whether C++ compiler accepts -std=c++2b... yes
checking whether gcc supports POSIXLY_CORRECT=1... yes
checking whether g++ builds executables that work... yes
checking whether g++ supports POSIXLY_CORRECT=1... yes
checking for dmd... no
checking for -g... no
checking for Java compiler... no
checking for Java virtual machine... no
checking for flex... flex
checking whether lex is flex... yes
checking whether flex supports --header=FILE... yes
checking lex output file root... lex.yy
checking lex library... none needed
checking whether yytext is a pointer... no
checking for bison... no
checking for byacc... no
checking for ranlib... (cached) ranlib
checking for GNU M4 that supports accurate traces... /usr/bin/m4
checking whether /usr/bin/m4 accepts --gnu... yes
checking how m4 supports trace files... --debugfile
checking for perl... /usr/bin/perl
checking for xsltproc... no
checking for inline... (cached) inline
checking for size_t... yes
checking for working alloca.h... yes
checking for alloca... yes
checking whether malloc is ptrdiff_t safe... yes
checking whether malloc, realloc, calloc set errno on failure... yes
checking whether lstat correctly handles trailing slash... yes
checking whether // is distinct from /... no
checking whether realpath works... yes
checking for getcwd... yes
checking for C/C++ restrict keyword... __restrict__
checking if environ is properly declared... yes
checking whether the preprocessor supports include_next... yes
checking whether source code line length is unlimited... yes
checking for complete errno.h... yes
checking for gcc options needed to detect all undeclared functions... none needed
checking whether strerror_r is declared... yes
checking whether strerror_r returns char *... yes
checking for mode_t... yes
checking for sig_atomic_t... yes
checking for working fcntl.h... yes
checking for pid_t... yes
checking for eaccess... yes
checking for stdint.h... yes
checking for inttypes.h... yes
checking whether printf supports size specifiers as in C99... yes
checking whether printf supports 'long double' arguments... yes
checking whether printf supports infinite 'double' arguments... yes
checking whether byte ordering is bigendian... no
checking whether long double and double are the same... yes
checking whether printf supports infinite 'long double' arguments... yes
checking whether printf supports the 'a' and 'A' directives... yes
checking whether printf supports the 'F' directive... yes
checking whether printf supports the 'n' directive... no
checking whether printf supports the 'ls' directive... yes
checking whether printf supports POSIX/XSI format strings with positions... yes
checking whether printf supports the grouping flag... yes
checking whether printf supports the left-adjust flag correctly... yes
checking whether printf supports the zero flag correctly... yes
checking whether printf supports large precisions... yes
checking whether the compiler produces multi-arch binaries... no
checking whether printf survives out-of-memory conditions... yes
checking for wchar_t... yes
checking for wint_t... yes
checking whether wint_t is large enough... yes
checking for intmax_t... yes
checking where to find the exponent in a 'double'... word 1 bit 20
checking whether snprintf returns a byte count as in C99... yes
checking whether snprintf truncates the result as in C99... yes
checking for snprintf... (cached) yes
checking for strnlen... yes
checking for wcslen... yes
checking for wcsnlen... yes
checking for mbrtowc... (cached) yes
checking for wcrtomb... yes
checking whether _snprintf is declared... no
checking whether frexp() can be used without linking with libm... yes
checking whether alarm is declared... yes
checking whether getcwd (NULL, 0) allocates memory for result... yes
checking for getcwd with POSIX signature... yes
checking whether getcwd is declared... yes
checking for d_ino member in directory struct... yes
checking for arithmetic hrtime_t... no
checking for getopt.h... (cached) yes
checking for getopt_long_only... yes
checking whether getopt is POSIX compatible... yes
checking for working GNU getopt function... yes
checking for working GNU getopt_long function... yes
checking for struct timeval... yes
checking for wide-enough struct timeval.tv_sec member... yes
checking for iconv... yes
checking for working iconv... yes
checking whether iconv is compatible with its POSIX signature... yes
checking whether limits.h has WORD_BIT, BOOL_WIDTH etc.... yes
checking whether stdint.h conforms to C99... yes
checking whether stdint.h works without ISO C predefines... yes
checking whether stdint.h has UINTMAX_WIDTH etc.... yes
checking whether INT32_MAX < INTMAX_MAX... yes
checking whether INT64_MAX == LONG_MAX... no
checking whether UINT32_MAX < UINTMAX_MAX... yes
checking whether UINT64_MAX == ULONG_MAX... no
checking where to find the exponent in a 'float'... word 0 bit 23
checking whether isnan(float) can be used without linking with libm...
yes checking whether isnan(float) works... yes checking whether isnan(double) can be used without linking with libm... yes checking whether isnan(long double) can be used without linking with libm... yes checking whether isnanl works... yes checking whether iswcntrl works... yes checking for towlower... yes checking for wctype_t... yes checking for wctrans_t... yes checking for nl_langinfo and CODESET... yes checking for a traditional french locale... fr_FR checking for a traditional japanese locale... ja_JP.EUC-JP checking for a french Unicode locale... fr_FR.UTF-8 checking for a transitional chinese locale... zh_CN.GB18030 checking whether ldexp() can be used without linking with libm... yes checking whether imported symbols can be declared weak... yes checking for pthread.h... yes checking for pthread_kill in -lpthread... yes checking whether POSIX threads API is available... yes checking for multithread API to use... posix checking for a sed that does not truncate output... /bin/sed checking whether NAN macro works... yes checking whether HUGE_VAL works... yes checking for mbstate_t... yes checking for mmap... yes checking for MAP_ANONYMOUS... yes checking whether memchr works... yes checking whether <limits.h> defines MIN and MAX... no checking whether <sys/param.h> defines MIN and MAX... yes checking whether obstack_printf is declared... yes checking for O_CLOEXEC... yes checking for promoted mode_t type... mode_t checking whether strerror(0) succeeds... yes checking for strerror_r with POSIX signature... no checking whether __xpg_strerror_r works... yes checking for library containing posix_spawn... none required checking for posix_spawn... yes checking whether posix_spawn is declared... yes checking whether posix_spawn works... yes checking whether posix_spawn rejects scripts without shebang... yes checking whether posix_spawnp rejects scripts without shebang... yes checking whether posix_spawnattr_setschedpolicy is supported... 
yes checking whether posix_spawnattr_setschedparam is supported... yes checking for ptrdiff_t... yes checking whether C symbols are prefixed with underscore at the linker level... no checking for sigset_t... yes checking for shared library path variable... LD_LIBRARY_PATH checking whether to activate relocatable installation... no checking whether malloc (0) returns nonnull... yes checking whether setenv is declared... yes checking for sched.h... yes checking for struct sched_param... yes checking for uid_t in sys/types.h... yes checking for volatile sig_atomic_t... yes checking for sighandler_t... yes checking whether snprintf is declared... yes checking for posix_spawnattr_t... yes checking for posix_spawn_file_actions_t... yes checking whether stat file-mode macros are broken... no checking for nlink_t... yes checking for stdbool.h that conforms to C99... yes checking for _Bool... yes checking for good max_align_t... yes checking whether NULL can be used in arbitrary expressions... yes checking whether fcloseall is declared... yes checking which flavor of printf attribute matches inttypes macros... system checking whether ecvt is declared... yes checking whether fcvt is declared... yes checking whether gcvt is declared... yes checking whether stpncpy is declared... yes checking whether strdup is declared... yes checking whether strndup is declared... yes checking whether <sys/ioctl.h> declares ioctl... yes checking for struct tms... yes checking for struct timespec in <time.h>... yes checking for TIME_UTC in <time.h>... yes checking whether execvpe is declared... yes checking whether clearerr_unlocked is declared... yes checking whether feof_unlocked is declared... yes checking whether ferror_unlocked is declared... yes checking whether fflush_unlocked is declared... yes checking whether fgets_unlocked is declared... yes checking whether fputc_unlocked is declared... yes checking whether fputs_unlocked is declared... yes checking whether fread_unlocked is declared... 
yes checking whether fwrite_unlocked is declared... yes checking whether getc_unlocked is declared... yes checking whether getchar_unlocked is declared... yes checking whether putc_unlocked is declared... yes checking whether putchar_unlocked is declared... yes checking whether unsetenv is declared... yes checking whether vsnprintf is declared... yes checking whether <wchar.h> uses 'inline' correctly... yes checking whether wcsdup is declared... yes checking POSIX termios... yes checking whether use of TIOCGWINSZ requires termios.h... no checking whether fchdir is declared... yes checking whether getdelim is declared... yes checking whether getdtablesize is declared... yes checking whether getline is declared... yes checking whether setlocale (LC_ALL, NULL) is multithread-safe... yes checking whether setlocale (category, NULL) is multithread-safe... yes checking whether locale.h defines locale_t... yes checking whether locale.h conforms to POSIX:2001... yes checking whether struct lconv is properly defined... yes checking whether memrchr is declared... yes checking whether strnlen is declared... yes checking for alloca as a compiler built-in... yes checking whether to enable assertions... yes checking for __builtin_expect... yes checking whether calloc (0, n) and calloc (n, 0) return nonnull... yes checking for library containing clock_gettime... none required checking for clock_gettime... yes checking for clock_settime... yes checking whether // is distinct from /... (cached) no checking whether dup2 works... yes checking for error_at_line... yes checking whether fcntl handles F_DUPFD correctly... yes checking whether fcntl understands F_DUPFD_CLOEXEC... needs runtime check checking whether conversion from 'int' to 'long double' works... yes checking whether fopen recognizes a trailing slash... yes checking whether fopen supports the mode character 'x'... yes checking whether fopen supports the mode character 'e'... yes checking for __fpending... 
yes checking whether __fpending is declared... yes checking for ptrdiff_t... (cached) yes checking whether free is known to preserve errno... yes checking whether frexp works... yes checking whether frexpl is declared... yes checking whether frexpl() can be used without linking with libm... yes checking whether frexpl works... yes checking whether getcwd handles long file names properly... yes checking for getpagesize... yes checking whether getcwd succeeds when 4k < cwd_length < 16k... yes checking whether gethrtime is declared... no checking whether CLOCK_MONOTONIC or CLOCK_REALTIME is defined... yes checking whether program_invocation_name is declared... yes checking whether program_invocation_short_name is declared... yes checking whether __argv is declared... no checking for gettimeofday with POSIX signature... yes checking whether the compiler generally respects inline... yes checking whether isnan macro works... yes checking whether isnan(double) can be used without linking with libm... (cached) yes checking whether isnan(double) can be used without linking with libm... (cached) yes checking whether isnan(float) can be used without linking with libm... (cached) yes checking whether isnan(float) works... (cached) yes checking whether isnan(long double) can be used without linking with libm... (cached) yes checking whether isnanl works... (cached) yes checking whether isnan(long double) can be used without linking with libm... (cached) yes checking whether isnanl works... (cached) yes checking whether iswblank is declared... yes checking whether iswdigit is ISO C compliant... yes checking whether iswxdigit is ISO C compliant... yes checking whether ldexpl is declared... yes checking whether ldexpl() can be used without linking with libm... yes checking whether ldexpl works... yes checking whether the compiler supports the __inline keyword... yes checking for pthread_rwlock_t... yes checking whether pthread_rwlock_rdlock prefers a writer to a reader... 
no checking whether malloc (0) returns nonnull... (cached) yes checking whether mbrtowc handles incomplete characters... yes checking whether mbrtowc works as well as mbtowc... yes checking whether mbrtowc handles a NULL pwc argument... yes checking whether mbrtowc handles a NULL string argument... yes checking whether mbrtowc has a correct return value... yes checking whether mbrtowc returns 0 when parsing a NUL character... yes checking whether mbrtowc stores incomplete characters... no checking whether mbrtowc works on empty input... yes checking whether the C locale is free of encoding errors... no checking whether mbrtowc handles incomplete characters... (cached) yes checking whether mbrtowc works as well as mbtowc... (cached) yes checking whether mbswidth is declared in <wchar.h>... no checking for mbstate_t... (cached) yes checking for mempcpy... (cached) yes checking for obstacks that work with any size object... no checking whether open recognizes a trailing slash... yes checking whether perror matches strerror... yes checking whether posix_spawn_file_actions_addclose works... yes checking whether posix_spawn_file_actions_adddup2 works... yes checking whether posix_spawn_file_actions_addopen works... yes checking whether frexp works... (cached) yes checking whether ldexp can be used without linking with libm... (cached) yes checking whether frexpl() can be used without linking with libm... (cached) yes checking whether frexpl works... (cached) yes checking whether frexpl is declared... (cached) yes checking whether ldexpl() can be used without linking with libm... (cached) yes checking whether ldexpl works... (cached) yes checking whether ldexpl is declared... (cached) yes checking whether program_invocation_name is declared... (cached) yes checking whether program_invocation_short_name is declared... (cached) yes checking for raise... yes checking for sigprocmask... yes checking for rawmemchr... yes checking for readline... no checking for readline/readline.h... 
no checking for readline/history.h... no checking whether readlink signature is correct... yes checking whether readlink handles trailing slash correctly... yes checking whether readlink truncates results correctly... yes checking whether realloc (0, 0) returns nonnull... yes checking for reallocarray... yes checking for getcwd... (cached) yes checking whether free is known to preserve errno... (cached) yes checking for mempcpy... (cached) yes checking for rawmemchr... (cached) yes checking for search.h... yes checking for tsearch... yes checking whether rename honors trailing slash on destination... yes checking whether rename honors trailing slash on source... yes checking whether rename manages hard links correctly... yes checking whether rename manages existing destinations correctly... yes checking for struct sigaction.sa_sigaction... yes checking for signbit macro... yes checking for signbit compiler built-ins... yes checking for sigprocmask... (cached) yes checking for stdint.h... (cached) yes checking for SIZE_MAX... yes checking for snprintf... (cached) yes checking whether snprintf respects a size of 1... yes checking whether printf supports POSIX/XSI format strings with positions... (cached) yes checking for snprintf... (cached) yes checking whether snprintf fully supports the 'n' directive... no checking whether snprintf respects a size of 1... (cached) yes checking whether vsnprintf respects a zero size as in C99... yes checking for ptrdiff_t... (cached) yes checking for ptrdiff_t... (cached) yes checking for ssize_t... yes checking whether stat handles trailing slashes on files... yes checking for working stdalign.h... yes checking for stpcpy... yes checking for working stpncpy... yes checking for working strerror function... yes checking for working strndup... yes checking whether strtod obeys C99... yes checking for strverscmp... yes checking for sys/single_threaded.h... yes checking whether unlink honors trailing slashes... 
yes checking whether unlink of a parent directory fails as it should... guessing yes checking for unsetenv... yes checking for unsetenv() return type... int checking whether unsetenv obeys POSIX... yes checking for ptrdiff_t... (cached) yes checking for vasprintf... yes checking for vasprintf... (cached) yes checking for ptrdiff_t... (cached) yes checking for vsnprintf... yes checking whether snprintf respects a size of 1... (cached) yes checking whether printf supports POSIX/XSI format strings with positions... (cached) yes checking for vsnprintf... (cached) yes checking whether snprintf fully supports the 'n' directive... (cached) no checking whether snprintf respects a size of 1... (cached) yes checking whether vsnprintf respects a zero size as in C99... (cached) yes checking for ptrdiff_t... (cached) yes checking for ptrdiff_t... (cached) yes checking for waitid... yes checking whether wcwidth is declared... yes checking whether wcwidth works reasonably in UTF-8 locales... yes checking whether use of TIOCGWINSZ requires sys/ioctl.h... yes checking whether use of TIOCGWINSZ requires termios.h... (cached) no checking whether use of struct winsize requires sys/ptem.h... no checking for stdint.h... (cached) yes checking whether getdtablesize works... yes checking whether setlocale (LC_ALL, NULL) is multithread-safe... (cached) yes checking whether setlocale (category, NULL) is multithread-safe... (cached) yes checking for ptrdiff_t... (cached) yes checking for getline... yes checking for working getline function... yes checking whether NLS is requested... yes checking for msgfmt... /usr/bin/msgfmt checking for gmsgfmt... /usr/bin/msgfmt checking for xgettext... /usr/bin/xgettext checking for msgmerge... /usr/bin/msgmerge checking for CFPreferencesCopyAppValue... no checking for CFLocaleCopyCurrent... no checking for GNU gettext in libc... yes checking whether to use NLS... yes checking where the gettext function comes from... libc checking for valgrind... 
no checking Valgrind suppression file... checking that generated files are newer than configure... done configure: creating ./config.status config.status: creating src/yacc config.status: creating javacomp.sh config.status: creating javaexec.sh config.status: creating gnulib-po/Makefile.in config.status: creating runtime-po/Makefile.in config.status: creating etc/bench.pl config.status: creating tests/atlocal config.status: creating tests/bison config.status: creating Makefile config.status: creating po/Makefile.in config.status: creating doc/yacc.1 config.status: creating lib/config.h config.status: executing depfiles commands config.status: executing po-directories commands config.status: creating gnulib-po/POTFILES config.status: creating gnulib-po/Makefile config.status: creating runtime-po/POTFILES config.status: creating runtime-po/Makefile config.status: creating po/POTFILES config.status: creating po/Makefile config.status: executing tests/atconfig commands make[1]: Leaving directory '/build/bison-3.8.2+dfsg' debian/rules execute_before_dh_auto_build make[1]: Entering directory '/build/bison-3.8.2+dfsg' # Touch all files in the dependency tree of bison.info to # inhibit makeinfo invocation in the build process. install -d /build/bison-3.8.2+dfsg/doc/figs touch --date="Jan 01 2000" \ /build/bison-3.8.2+dfsg/doc/bison.info \ /build/bison-3.8.2+dfsg/doc/bison.help \ /build/bison-3.8.2+dfsg/doc/figs/example.txt \ /build/bison-3.8.2+dfsg/doc/figs/example-reduce.txt \ /build/bison-3.8.2+dfsg/doc/figs/example-shift.txt \ /build/bison-3.8.2+dfsg/doc/*.texi \ /build/bison-3.8.2+dfsg/build-aux/cross-options.pl \ /build/bison-3.8.2+dfsg/src/getargs.c make[1]: Leaving directory '/build/bison-3.8.2+dfsg' dh_auto_build make -j3 make[1]: Entering directory '/build/bison-3.8.2+dfsg' rm -f examples/c/reccalc/scan.stamp examples/c/reccalc/scan.stamp.tmp rm -f lib/alloca.h-t lib/alloca.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! 
*/'; \ sed -e 's|@''HAVE_ALLOCA_H''@|1|g' < ./lib/alloca.in.h; \ } > lib/alloca.h-t && \ mv -f lib/alloca.h-t lib/alloca.h /bin/mkdir -p examples/c/reccalc rm -f lib/configmake.h-t && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \ echo '#if HAVE_WINSOCK2_H'; \ echo '# include <winsock2.h> /* avoid mingw pollution on DATADIR */'; \ echo '#endif'; \ echo '#define PREFIX "/usr"'; \ echo '#define EXEC_PREFIX "/usr"'; \ echo '#define BINDIR "/usr/bin"'; \ echo '#define SBINDIR "/usr/sbin"'; \ echo '#define LIBEXECDIR "/usr/libexec"'; \ echo '#define DATAROOTDIR "/usr/share"'; \ echo '#define DATADIR "/usr/share"'; \ echo '#define SYSCONFDIR "/etc"'; \ echo '#define SHAREDSTATEDIR "/usr/com"'; \ echo '#define LOCALSTATEDIR "/var"'; \ echo '#define RUNSTATEDIR "/run"'; \ echo '#define INCLUDEDIR "/usr/include"'; \ echo '#define OLDINCLUDEDIR "/usr/include"'; \ echo '#define DOCDIR "/usr/share/doc/bison"'; \ echo '#define INFODIR "/usr/share/info"'; \ echo '#define HTMLDIR "/usr/share/doc/bison"'; \ echo '#define DVIDIR "/usr/share/doc/bison"'; \ echo '#define PDFDIR "/usr/share/doc/bison"'; \ echo '#define PSDIR "/usr/share/doc/bison"'; \ echo '#define LIBDIR "/usr/lib/arm-linux-gnueabihf"'; \ echo '#define LISPDIR "/usr/share/emacs/site-lisp"'; \ echo '#define LOCALEDIR "/usr/share/locale"'; \ echo '#define MANDIR "/usr/share/man"'; \ echo '#define MANEXT ""'; \ echo '#define PKGDATADIR "/usr/share/bison"'; \ echo '#define PKGINCLUDEDIR "/usr/include/bison"'; \ echo '#define PKGLIBDIR "/usr/lib/arm-linux-gnueabihf/bison"'; \ echo '#define PKGLIBEXECDIR "/usr/libexec/bison"'; \ } | sed '/""/d' > lib/configmake.h-t && \ mv -f lib/configmake.h-t lib/configmake.h touch examples/c/reccalc/scan.stamp.tmp flex -oexamples/c/reccalc/scan.c --header=examples/c/reccalc/scan.h ./examples/c/reccalc/scan.l rm -f lib/fcntl.h-t lib/fcntl.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! 
*/'; \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_FCNTL_H''@||g' \ -e 's/@''GNULIB_CREAT''@/0/g' \ -e 's/@''GNULIB_FCNTL''@/1/g' \ -e 's/@''GNULIB_NONBLOCKING''@/0/g' \ -e 's/@''GNULIB_OPEN''@/1/g' \ -e 's/@''GNULIB_OPENAT''@/0/g' \ -e 's/@''GNULIB_MDA_CREAT''@/1/g' \ -e 's/@''GNULIB_MDA_OPEN''@/1/g' \ -e 's|@''HAVE_FCNTL''@|1|g' \ -e 's|@''HAVE_OPENAT''@|1|g' \ -e 's|@''REPLACE_CREAT''@|0|g' \ -e 's|@''REPLACE_FCNTL''@|1|g' \ -e 's|@''REPLACE_OPEN''@|0|g' \ -e 's|@''REPLACE_OPENAT''@|0|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/fcntl.in.h; \ } > lib/fcntl.h-t && \ mv lib/fcntl.h-t lib/fcntl.h rm -f lib/iconv.h-t lib/iconv.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */' && \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_ICONV_H''@||g' \ -e 's/@''GNULIB_ICONV''@/1/g' \ -e 's|@''ICONV_CONST''@||g' \ -e 's|@''REPLACE_ICONV''@|0|g' \ -e 's|@''REPLACE_ICONV_OPEN''@|0|g' \ -e 's|@''REPLACE_ICONV_UTF''@|0|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/iconv.in.h; \ } > lib/iconv.h-t && \ mv lib/iconv.h-t lib/iconv.h rm -f lib/inttypes.h-t lib/inttypes.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! 
*/'; \ sed -e 's/@''HAVE_INTTYPES_H''@/1/g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_INTTYPES_H''@||g' \ -e 's/@''APPLE_UNIVERSAL_BUILD''@/0/g' \ -e 's/@''PRIPTR_PREFIX''@/""/g' \ -e 's/@''GNULIB_IMAXABS''@/0/g' \ -e 's/@''GNULIB_IMAXDIV''@/0/g' \ -e 's/@''GNULIB_STRTOIMAX''@/0/g' \ -e 's/@''GNULIB_STRTOUMAX''@/0/g' \ -e 's/@''HAVE_DECL_IMAXABS''@/1/g' \ -e 's/@''HAVE_DECL_IMAXDIV''@/1/g' \ -e 's/@''HAVE_DECL_STRTOIMAX''@/1/g' \ -e 's/@''HAVE_DECL_STRTOUMAX''@/1/g' \ -e 's/@''HAVE_IMAXDIV_T''@/1/g' \ -e 's/@''REPLACE_STRTOIMAX''@/0/g' \ -e 's/@''REPLACE_STRTOUMAX''@/0/g' \ -e 's/@''INT32_MAX_LT_INTMAX_MAX''@/1/g' \ -e 's/@''INT64_MAX_EQ_LONG_MAX''@/0/g' \ -e 's/@''UINT32_MAX_LT_UINTMAX_MAX''@/1/g' \ -e 's/@''UINT64_MAX_EQ_ULONG_MAX''@/0/g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/inttypes.in.h; \ } > lib/inttypes.h-t && \ mv lib/inttypes.h-t lib/inttypes.h mv examples/c/reccalc/scan.stamp.tmp examples/c/reccalc/scan.stamp rm -f lib/textstyle.h-t lib/textstyle.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \ cat ./lib/textstyle.in.h; \ } > lib/textstyle.h-t && \ mv lib/textstyle.h-t lib/textstyle.h rm -f lib/limits.h-t lib/limits.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */' && \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_LIMITS_H''@||g' \ < ./lib/limits.in.h; \ } > lib/limits.h-t && \ mv lib/limits.h-t lib/limits.h rm -f lib/locale.h-t lib/locale.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! 
*/' && \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_LOCALE_H''@||g' \ -e 's/@''GNULIB_LOCALECONV''@/0/g' \ -e 's/@''GNULIB_SETLOCALE''@/0/g' \ -e 's/@''GNULIB_SETLOCALE_NULL''@/1/g' \ -e 's/@''GNULIB_DUPLOCALE''@/0/g' \ -e 's/@''GNULIB_LOCALENAME''@/0/g' \ -e 's|@''HAVE_NEWLOCALE''@|1|g' \ -e 's|@''HAVE_DUPLOCALE''@|1|g' \ -e 's|@''HAVE_FREELOCALE''@|1|g' \ -e 's|@''HAVE_XLOCALE_H''@|0|g' \ -e 's|@''REPLACE_LOCALECONV''@|0|g' \ -e 's|@''REPLACE_SETLOCALE''@|0|g' \ -e 's|@''REPLACE_NEWLOCALE''@|0|g' \ -e 's|@''REPLACE_DUPLOCALE''@|0|g' \ -e 's|@''REPLACE_FREELOCALE''@|0|g' \ -e 's|@''REPLACE_STRUCT_LCONV''@|0|g' \ -e 's|@''LOCALENAME_ENHANCE_LOCALE_FUNCS''@|0|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/locale.in.h; \ } > lib/locale.h-t && \ mv lib/locale.h-t lib/locale.h rm -f lib/math.h-t lib/math.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! 
*/' && \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT_AS_FIRST_DIRECTIVE''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_AS_FIRST_DIRECTIVE_MATH_H''@||g' \ -e 's/@''GNULIB_ACOSF''@/0/g' \ -e 's/@''GNULIB_ACOSL''@/0/g' \ -e 's/@''GNULIB_ASINF''@/0/g' \ -e 's/@''GNULIB_ASINL''@/0/g' \ -e 's/@''GNULIB_ATANF''@/0/g' \ -e 's/@''GNULIB_ATANL''@/0/g' \ -e 's/@''GNULIB_ATAN2F''@/0/g' \ -e 's/@''GNULIB_CBRT''@/0/g' \ -e 's/@''GNULIB_CBRTF''@/0/g' \ -e 's/@''GNULIB_CBRTL''@/0/g' \ -e 's/@''GNULIB_CEIL''@/0/g' \ -e 's/@''GNULIB_CEILF''@/0/g' \ -e 's/@''GNULIB_CEILL''@/0/g' \ -e 's/@''GNULIB_COPYSIGN''@/0/g' \ -e 's/@''GNULIB_COPYSIGNF''@/0/g' \ -e 's/@''GNULIB_COPYSIGNL''@/0/g' \ -e 's/@''GNULIB_COSF''@/0/g' \ -e 's/@''GNULIB_COSL''@/0/g' \ -e 's/@''GNULIB_COSHF''@/0/g' \ -e 's/@''GNULIB_EXPF''@/0/g' \ -e 's/@''GNULIB_EXPL''@/0/g' \ -e 's/@''GNULIB_EXP2''@/0/g' \ -e 's/@''GNULIB_EXP2F''@/0/g' \ -e 's/@''GNULIB_EXP2L''@/0/g' \ -e 's/@''GNULIB_EXPM1''@/0/g' \ -e 's/@''GNULIB_EXPM1F''@/0/g' \ -e 's/@''GNULIB_EXPM1L''@/0/g' \ -e 's/@''GNULIB_FABSF''@/0/g' \ -e 's/@''GNULIB_FABSL''@/0/g' \ -e 's/@''GNULIB_FLOOR''@/0/g' \ -e 's/@''GNULIB_FLOORF''@/0/g' \ -e 's/@''GNULIB_FLOORL''@/0/g' \ -e 's/@''GNULIB_FMA''@/0/g' \ -e 's/@''GNULIB_FMAF''@/0/g' \ -e 's/@''GNULIB_FMAL''@/0/g' \ -e 's/@''GNULIB_FMOD''@/0/g' \ -e 's/@''GNULIB_FMODF''@/0/g' \ -e 's/@''GNULIB_FMODL''@/0/g' \ -e 's/@''GNULIB_FREXPF''@/0/g' \ -e 's/@''GNULIB_FREXP''@/1/g' \ -e 's/@''GNULIB_FREXPL''@/1/g' \ -e 's/@''GNULIB_HYPOT''@/0/g' \ -e 's/@''GNULIB_HYPOTF''@/0/g' \ -e 's/@''GNULIB_HYPOTL''@/0/g' \ < ./lib/math.in.h | \ sed -e 's/@''GNULIB_ILOGB''@/0/g' \ -e 's/@''GNULIB_ILOGBF''@/0/g' \ -e 's/@''GNULIB_ILOGBL''@/0/g' \ -e 's/@''GNULIB_ISFINITE''@/0/g' \ -e 's/@''GNULIB_ISINF''@/0/g' \ -e 's/@''GNULIB_ISNAN''@/1/g' \ -e 's/@''GNULIB_ISNANF''@/1/g' \ -e 's/@''GNULIB_ISNAND''@/1/g' \ -e 's/@''GNULIB_ISNANL''@/1/g' \ -e 
's/@''GNULIB_LDEXPF''@/0/g' \ -e 's/@''GNULIB_LDEXPL''@/1/g' \ -e 's/@''GNULIB_LOG''@/0/g' \ -e 's/@''GNULIB_LOGF''@/0/g' \ -e 's/@''GNULIB_LOGL''@/0/g' \ -e 's/@''GNULIB_LOG10''@/0/g' \ -e 's/@''GNULIB_LOG10F''@/0/g' \ -e 's/@''GNULIB_LOG10L''@/0/g' \ -e 's/@''GNULIB_LOG1P''@/0/g' \ -e 's/@''GNULIB_LOG1PF''@/0/g' \ -e 's/@''GNULIB_LOG1PL''@/0/g' \ -e 's/@''GNULIB_LOG2''@/0/g' \ -e 's/@''GNULIB_LOG2F''@/0/g' \ -e 's/@''GNULIB_LOG2L''@/0/g' \ -e 's/@''GNULIB_LOGB''@/0/g' \ -e 's/@''GNULIB_LOGBF''@/0/g' \ -e 's/@''GNULIB_LOGBL''@/0/g' \ -e 's/@''GNULIB_MODF''@/0/g' \ -e 's/@''GNULIB_MODFF''@/0/g' \ -e 's/@''GNULIB_MODFL''@/0/g' \ -e 's/@''GNULIB_POWF''@/0/g' \ -e 's/@''GNULIB_REMAINDER''@/0/g' \ -e 's/@''GNULIB_REMAINDERF''@/0/g' \ -e 's/@''GNULIB_REMAINDERL''@/0/g' \ -e 's/@''GNULIB_RINT''@/0/g' \ -e 's/@''GNULIB_RINTF''@/0/g' \ -e 's/@''GNULIB_RINTL''@/0/g' \ -e 's/@''GNULIB_ROUND''@/0/g' \ -e 's/@''GNULIB_ROUNDF''@/0/g' \ -e 's/@''GNULIB_ROUNDL''@/0/g' \ -e 's/@''GNULIB_SIGNBIT''@/1/g' \ -e 's/@''GNULIB_SINF''@/0/g' \ -e 's/@''GNULIB_SINL''@/0/g' \ -e 's/@''GNULIB_SINHF''@/0/g' \ -e 's/@''GNULIB_SQRTF''@/0/g' \ -e 's/@''GNULIB_SQRTL''@/0/g' \ -e 's/@''GNULIB_TANF''@/0/g' \ -e 's/@''GNULIB_TANL''@/0/g' \ -e 's/@''GNULIB_TANHF''@/0/g' \ -e 's/@''GNULIB_TRUNC''@/0/g' \ -e 's/@''GNULIB_TRUNCF''@/0/g' \ -e 's/@''GNULIB_TRUNCL''@/0/g' \ -e 's/@''GNULIB_MDA_J0''@/1/g' \ -e 's/@''GNULIB_MDA_J1''@/1/g' \ -e 's/@''GNULIB_MDA_JN''@/1/g' \ -e 's/@''GNULIB_MDA_Y0''@/1/g' \ -e 's/@''GNULIB_MDA_Y1''@/1/g' \ -e 's/@''GNULIB_MDA_YN''@/1/g' \ | \ sed -e 's|@''HAVE_ACOSF''@|1|g' \ -e 's|@''HAVE_ACOSL''@|1|g' \ -e 's|@''HAVE_ASINF''@|1|g' \ -e 's|@''HAVE_ASINL''@|1|g' \ -e 's|@''HAVE_ATANF''@|1|g' \ -e 's|@''HAVE_ATANL''@|1|g' \ -e 's|@''HAVE_ATAN2F''@|1|g' \ -e 's|@''HAVE_CBRT''@|1|g' \ -e 's|@''HAVE_CBRTF''@|1|g' \ -e 's|@''HAVE_CBRTL''@|1|g' \ -e 's|@''HAVE_COPYSIGN''@|1|g' \ -e 's|@''HAVE_COPYSIGNL''@|1|g' \ -e 's|@''HAVE_COSF''@|1|g' \ -e 's|@''HAVE_COSL''@|1|g' \ -e 
's|@''HAVE_COSHF''@|1|g' \ -e 's|@''HAVE_EXPF''@|1|g' \ -e 's|@''HAVE_EXPL''@|1|g' \ -e 's|@''HAVE_EXPM1''@|1|g' \ -e 's|@''HAVE_EXPM1F''@|1|g' \ -e 's|@''HAVE_FABSF''@|1|g' \ -e 's|@''HAVE_FABSL''@|1|g' \ -e 's|@''HAVE_FMA''@|1|g' \ -e 's|@''HAVE_FMAF''@|1|g' \ -e 's|@''HAVE_FMAL''@|1|g' \ -e 's|@''HAVE_FMODF''@|1|g' \ -e 's|@''HAVE_FMODL''@|1|g' \ -e 's|@''HAVE_FREXPF''@|1|g' \ -e 's|@''HAVE_HYPOTF''@|1|g' \ -e 's|@''HAVE_HYPOTL''@|1|g' \ -e 's|@''HAVE_ILOGB''@|1|g' \ -e 's|@''HAVE_ILOGBF''@|1|g' \ -e 's|@''HAVE_ILOGBL''@|1|g' \ -e 's|@''HAVE_ISNANF''@|1|g' \ -e 's|@''HAVE_ISNAND''@|1|g' \ -e 's|@''HAVE_ISNANL''@|1|g' \ -e 's|@''HAVE_LDEXPF''@|1|g' \ -e 's|@''HAVE_LOGF''@|1|g' \ -e 's|@''HAVE_LOGL''@|1|g' \ -e 's|@''HAVE_LOG10F''@|1|g' \ -e 's|@''HAVE_LOG10L''@|1|g' \ -e 's|@''HAVE_LOG1P''@|1|g' \ -e 's|@''HAVE_LOG1PF''@|1|g' \ -e 's|@''HAVE_LOG1PL''@|1|g' \ -e 's|@''HAVE_LOGBF''@|1|g' \ -e 's|@''HAVE_LOGBL''@|1|g' \ -e 's|@''HAVE_MODFF''@|1|g' \ -e 's|@''HAVE_MODFL''@|1|g' \ -e 's|@''HAVE_POWF''@|1|g' \ -e 's|@''HAVE_REMAINDER''@|1|g' \ -e 's|@''HAVE_REMAINDERF''@|1|g' \ -e 's|@''HAVE_RINT''@|1|g' \ -e 's|@''HAVE_RINTL''@|1|g' \ -e 's|@''HAVE_SINF''@|1|g' \ -e 's|@''HAVE_SINL''@|1|g' \ -e 's|@''HAVE_SINHF''@|1|g' \ -e 's|@''HAVE_SQRTF''@|1|g' \ -e 's|@''HAVE_SQRTL''@|1|g' \ -e 's|@''HAVE_TANF''@|1|g' \ -e 's|@''HAVE_TANL''@|1|g' \ -e 's|@''HAVE_TANHF''@|1|g' \ -e 's|@''HAVE_DECL_ACOSL''@|1|g' \ -e 's|@''HAVE_DECL_ASINL''@|1|g' \ -e 's|@''HAVE_DECL_ATANL''@|1|g' \ -e 's|@''HAVE_DECL_CBRTF''@|1|g' \ -e 's|@''HAVE_DECL_CBRTL''@|1|g' \ -e 's|@''HAVE_DECL_CEILF''@|1|g' \ -e 's|@''HAVE_DECL_CEILL''@|1|g' \ -e 's|@''HAVE_DECL_COPYSIGNF''@|1|g' \ -e 's|@''HAVE_DECL_COSL''@|1|g' \ -e 's|@''HAVE_DECL_EXPL''@|1|g' \ -e 's|@''HAVE_DECL_EXP2''@|1|g' \ -e 's|@''HAVE_DECL_EXP2F''@|1|g' \ -e 's|@''HAVE_DECL_EXP2L''@|1|g' \ -e 's|@''HAVE_DECL_EXPM1L''@|1|g' \ -e 's|@''HAVE_DECL_FLOORF''@|1|g' \ -e 's|@''HAVE_DECL_FLOORL''@|1|g' \ -e 's|@''HAVE_DECL_FREXPL''@|1|g' \ -e 
's|@''HAVE_DECL_LDEXPL''@|1|g' \ -e 's|@''HAVE_DECL_LOGL''@|1|g' \ -e 's|@''HAVE_DECL_LOG10L''@|1|g' \ -e 's|@''HAVE_DECL_LOG2''@|1|g' \ -e 's|@''HAVE_DECL_LOG2F''@|1|g' \ -e 's|@''HAVE_DECL_LOG2L''@|1|g' \ -e 's|@''HAVE_DECL_LOGB''@|1|g' \ -e 's|@''HAVE_DECL_REMAINDER''@|1|g' \ -e 's|@''HAVE_DECL_REMAINDERL''@|1|g' \ -e 's|@''HAVE_DECL_RINTF''@|1|g' \ -e 's|@''HAVE_DECL_ROUND''@|1|g' \ -e 's|@''HAVE_DECL_ROUNDF''@|1|g' \ -e 's|@''HAVE_DECL_ROUNDL''@|1|g' \ -e 's|@''HAVE_DECL_SINL''@|1|g' \ -e 's|@''HAVE_DECL_SQRTL''@|1|g' \ -e 's|@''HAVE_DECL_TANL''@|1|g' \ -e 's|@''HAVE_DECL_TRUNC''@|1|g' \ -e 's|@''HAVE_DECL_TRUNCF''@|1|g' \ -e 's|@''HAVE_DECL_TRUNCL''@|1|g' \ | \ sed -e 's|@''REPLACE_ACOSF''@|0|g' \ -e 's|@''REPLACE_ASINF''@|0|g' \ -e 's|@''REPLACE_ATANF''@|0|g' \ -e 's|@''REPLACE_ATAN2F''@|0|g' \ -e 's|@''REPLACE_CBRTF''@|0|g' \ -e 's|@''REPLACE_CBRTL''@|0|g' \ -e 's|@''REPLACE_CEIL''@|0|g' \ -e 's|@''REPLACE_CEILF''@|0|g' \ -e 's|@''REPLACE_CEILL''@|0|g' \ -e 's|@''REPLACE_COSF''@|0|g' \ -e 's|@''REPLACE_COSHF''@|0|g' \ -e 's|@''REPLACE_EXPF''@|0|g' \ -e 's|@''REPLACE_EXPL''@|0|g' \ -e 's|@''REPLACE_EXPM1''@|0|g' \ -e 's|@''REPLACE_EXPM1F''@|0|g' \ -e 's|@''REPLACE_EXPM1L''@|0|g' \ -e 's|@''REPLACE_EXP2''@|0|g' \ -e 's|@''REPLACE_EXP2L''@|0|g' \ -e 's|@''REPLACE_FABSL''@|0|g' \ -e 's|@''REPLACE_FLOOR''@|0|g' \ -e 's|@''REPLACE_FLOORF''@|0|g' \ -e 's|@''REPLACE_FLOORL''@|0|g' \ -e 's|@''REPLACE_FMA''@|0|g' \ -e 's|@''REPLACE_FMAF''@|0|g' \ -e 's|@''REPLACE_FMAL''@|0|g' \ -e 's|@''REPLACE_FMOD''@|0|g' \ -e 's|@''REPLACE_FMODF''@|0|g' \ -e 's|@''REPLACE_FMODL''@|0|g' \ -e 's|@''REPLACE_FREXPF''@|0|g' \ -e 's|@''REPLACE_FREXP''@|0|g' \ -e 's|@''REPLACE_FREXPL''@|0|g' \ -e 's|@''REPLACE_HUGE_VAL''@|0|g' \ -e 's|@''REPLACE_HYPOT''@|0|g' \ -e 's|@''REPLACE_HYPOTF''@|0|g' \ -e 's|@''REPLACE_HYPOTL''@|0|g' \ -e 's|@''REPLACE_ILOGB''@|0|g' \ -e 's|@''REPLACE_ILOGBF''@|0|g' \ -e 's|@''REPLACE_ILOGBL''@|0|g' \ -e 's|@''REPLACE_ISFINITE''@|0|g' \ -e 
's|@''REPLACE_ISINF''@|0|g' \ -e 's|@''REPLACE_ISNAN''@|0|g' \ -e 's|@''REPLACE_ITOLD''@|0|g' \ -e 's|@''REPLACE_LDEXPL''@|0|g' \ -e 's|@''REPLACE_LOG''@|0|g' \ -e 's|@''REPLACE_LOGF''@|0|g' \ -e 's|@''REPLACE_LOGL''@|0|g' \ -e 's|@''REPLACE_LOG10''@|0|g' \ -e 's|@''REPLACE_LOG10F''@|0|g' \ -e 's|@''REPLACE_LOG10L''@|0|g' \ -e 's|@''REPLACE_LOG1P''@|0|g' \ -e 's|@''REPLACE_LOG1PF''@|0|g' \ -e 's|@''REPLACE_LOG1PL''@|0|g' \ -e 's|@''REPLACE_LOG2''@|0|g' \ -e 's|@''REPLACE_LOG2F''@|0|g' \ -e 's|@''REPLACE_LOG2L''@|0|g' \ -e 's|@''REPLACE_LOGB''@|0|g' \ -e 's|@''REPLACE_LOGBF''@|0|g' \ -e 's|@''REPLACE_LOGBL''@|0|g' \ -e 's|@''REPLACE_MODF''@|0|g' \ -e 's|@''REPLACE_MODFF''@|0|g' \ -e 's|@''REPLACE_MODFL''@|0|g' \ -e 's|@''REPLACE_NAN''@|0|g' \ -e 's|@''REPLACE_REMAINDER''@|0|g' \ -e 's|@''REPLACE_REMAINDERF''@|0|g' \ -e 's|@''REPLACE_REMAINDERL''@|0|g' \ -e 's|@''REPLACE_RINTL''@|0|g' \ -e 's|@''REPLACE_ROUND''@|0|g' \ -e 's|@''REPLACE_ROUNDF''@|0|g' \ -e 's|@''REPLACE_ROUNDL''@|0|g' \ -e 's|@''REPLACE_SIGNBIT''@|0|g' \ -e 's|@''REPLACE_SIGNBIT_USING_BUILTINS''@|1|g' \ -e 's|@''REPLACE_SINF''@|0|g' \ -e 's|@''REPLACE_SINHF''@|0|g' \ -e 's|@''REPLACE_SQRTF''@|0|g' \ -e 's|@''REPLACE_SQRTL''@|0|g' \ -e 's|@''REPLACE_TANF''@|0|g' \ -e 's|@''REPLACE_TANHF''@|0|g' \ -e 's|@''REPLACE_TRUNC''@|0|g' \ -e 's|@''REPLACE_TRUNCF''@|0|g' \ -e 's|@''REPLACE_TRUNCL''@|0|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h'; \ } > lib/math.h-t && \ mv lib/math.h-t lib/math.h rm -f lib/sched.h-t lib/sched.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! 
*/'; \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''HAVE_SCHED_H''@|1|g' \ -e 's|@''HAVE_SYS_CDEFS_H''@|1|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_SCHED_H''@||g' \ -e 's|@''HAVE_STRUCT_SCHED_PARAM''@|1|g' \ -e 's/@''GNULIB_SCHED_YIELD''@/0/g' \ -e 's|@''HAVE_SCHED_YIELD''@|1|g' \ -e 's|@''REPLACE_SCHED_YIELD''@|0|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/sched.in.h; \ } > lib/sched.h-t && \ mv lib/sched.h-t lib/sched.h /bin/mkdir -p lib/malloc rm -f lib/malloc/scratch_buffer.gl.h-t lib/malloc/scratch_buffer.gl.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \ sed -e 's|__always_inline|inline _GL_ATTRIBUTE_ALWAYS_INLINE|g' \ -e 's|__glibc_likely|_GL_LIKELY|g' \ -e 's|__glibc_unlikely|_GL_UNLIKELY|g' \ -e '/libc_hidden_proto/d' \ < ./lib/malloc/scratch_buffer.h; \ } > lib/malloc/scratch_buffer.gl.h-t && \ mv lib/malloc/scratch_buffer.gl.h-t lib/malloc/scratch_buffer.gl.h rm -f lib/signal.h-t lib/signal.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! 
*/' && \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_SIGNAL_H''@||g' \ -e 's/@''GNULIB_PTHREAD_SIGMASK''@/0/g' \ -e 's/@''GNULIB_RAISE''@/1/g' \ -e 's/@''GNULIB_SIGNAL_H_SIGPIPE''@/0/g' \ -e 's/@''GNULIB_SIGPROCMASK''@/1/g' \ -e 's/@''GNULIB_SIGACTION''@/1/g' \ -e 's|@''HAVE_POSIX_SIGNALBLOCKING''@|1|g' \ -e 's|@''HAVE_PTHREAD_SIGMASK''@|1|g' \ -e 's|@''HAVE_RAISE''@|1|g' \ -e 's|@''HAVE_SIGSET_T''@|1|g' \ -e 's|@''HAVE_SIGINFO_T''@|1|g' \ -e 's|@''HAVE_SIGACTION''@|1|g' \ -e 's|@''HAVE_STRUCT_SIGACTION_SA_SIGACTION''@|1|g' \ -e 's|@''HAVE_TYPE_VOLATILE_SIG_ATOMIC_T''@|1|g' \ -e 's|@''HAVE_SIGHANDLER_T''@|1|g' \ -e 's|@''REPLACE_PTHREAD_SIGMASK''@|0|g' \ -e 's|@''REPLACE_RAISE''@|0|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/signal.in.h; \ } > lib/signal.h-t && \ mv lib/signal.h-t lib/signal.h rm -f lib/spawn.h-t lib/spawn.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! 
*/'; \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''HAVE_SPAWN_H''@|1|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_SPAWN_H''@||g' \ -e 's/@''GNULIB_POSIX_SPAWN''@/1/g' \ -e 's/@''GNULIB_POSIX_SPAWNP''@/1/g' \ -e 's/@''GNULIB_POSIX_SPAWN_FILE_ACTIONS_INIT''@/1/g' \ -e 's/@''GNULIB_POSIX_SPAWN_FILE_ACTIONS_ADDCHDIR''@/1/g' \ -e 's/@''GNULIB_POSIX_SPAWN_FILE_ACTIONS_ADDCLOSE''@/1/g' \ -e 's/@''GNULIB_POSIX_SPAWN_FILE_ACTIONS_ADDDUP2''@/1/g' \ -e 's/@''GNULIB_POSIX_SPAWN_FILE_ACTIONS_ADDFCHDIR''@/0/g' \ -e 's/@''GNULIB_POSIX_SPAWN_FILE_ACTIONS_ADDOPEN''@/1/g' \ -e 's/@''GNULIB_POSIX_SPAWN_FILE_ACTIONS_DESTROY''@/1/g' \ -e 's/@''GNULIB_POSIX_SPAWNATTR_INIT''@/1/g' \ -e 's/@''GNULIB_POSIX_SPAWNATTR_GETFLAGS''@/0/g' \ -e 's/@''GNULIB_POSIX_SPAWNATTR_SETFLAGS''@/1/g' \ -e 's/@''GNULIB_POSIX_SPAWNATTR_GETPGROUP''@/0/g' \ -e 's/@''GNULIB_POSIX_SPAWNATTR_SETPGROUP''@/1/g' \ -e 's/@''GNULIB_POSIX_SPAWNATTR_GETSCHEDPARAM''@/0/g' \ -e 's/@''GNULIB_POSIX_SPAWNATTR_SETSCHEDPARAM''@/0/g' \ -e 's/@''GNULIB_POSIX_SPAWNATTR_GETSCHEDPOLICY''@/0/g' \ -e 's/@''GNULIB_POSIX_SPAWNATTR_SETSCHEDPOLICY''@/0/g' \ -e 's/@''GNULIB_POSIX_SPAWNATTR_GETSIGDEFAULT''@/0/g' \ -e 's/@''GNULIB_POSIX_SPAWNATTR_SETSIGDEFAULT''@/0/g' \ -e 's/@''GNULIB_POSIX_SPAWNATTR_GETSIGMASK''@/0/g' \ -e 's/@''GNULIB_POSIX_SPAWNATTR_SETSIGMASK''@/1/g' \ -e 's/@''GNULIB_POSIX_SPAWNATTR_DESTROY''@/1/g' \ -e 's|@''HAVE_POSIX_SPAWN''@|1|g' \ -e 's|@''HAVE_POSIX_SPAWNATTR_T''@|1|g' \ -e 's|@''HAVE_POSIX_SPAWN_FILE_ACTIONS_T''@|1|g' \ -e 's|@''HAVE_POSIX_SPAWN_FILE_ACTIONS_ADDCHDIR''@|0|g' \ -e 's|@''HAVE_POSIX_SPAWN_FILE_ACTIONS_ADDFCHDIR''@|1|g' \ -e 's|@''REPLACE_POSIX_SPAWN''@|0|g' \ -e 's|@''REPLACE_POSIX_SPAWN_FILE_ACTIONS_ADDCHDIR''@|0|g' \ -e 's|@''REPLACE_POSIX_SPAWN_FILE_ACTIONS_ADDCLOSE''@|0|g' \ -e 's|@''REPLACE_POSIX_SPAWN_FILE_ACTIONS_ADDDUP2''@|0|g' \ -e 
's|@''REPLACE_POSIX_SPAWN_FILE_ACTIONS_ADDFCHDIR''@|0|g' \ -e 's|@''REPLACE_POSIX_SPAWN_FILE_ACTIONS_ADDOPEN''@|0|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/spawn.in.h; \ } > lib/spawn.h-t && \ mv lib/spawn.h-t lib/spawn.h rm -f lib/stdio.h-t lib/stdio.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */' && \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_STDIO_H''@||g' \ -e 's/@''GNULIB_DPRINTF''@/0/g' \ -e 's/@''GNULIB_FCLOSE''@/0/g' \ -e 's/@''GNULIB_FDOPEN''@/0/g' \ -e 's/@''GNULIB_FFLUSH''@/0/g' \ -e 's/@''GNULIB_FGETC''@/1/g' \ -e 's/@''GNULIB_FGETS''@/1/g' \ -e 's/@''GNULIB_FOPEN''@/1/g' \ -e 's/@''GNULIB_FPRINTF''@/1/g' \ -e 's/@''GNULIB_FPRINTF_POSIX''@/1/g' \ -e 's/@''GNULIB_FPURGE''@/0/g' \ -e 's/@''GNULIB_FPUTC''@/1/g' \ -e 's/@''GNULIB_FPUTS''@/1/g' \ -e 's/@''GNULIB_FREAD''@/1/g' \ -e 's/@''GNULIB_FREOPEN''@/0/g' \ -e 's/@''GNULIB_FSCANF''@/1/g' \ -e 's/@''GNULIB_FSEEK''@/0/g' \ -e 's/@''GNULIB_FSEEKO''@/0/g' \ -e 's/@''GNULIB_FTELL''@/0/g' \ -e 's/@''GNULIB_FTELLO''@/0/g' \ -e 's/@''GNULIB_FWRITE''@/1/g' \ -e 's/@''GNULIB_GETC''@/1/g' \ -e 's/@''GNULIB_GETCHAR''@/1/g' \ -e 's/@''GNULIB_GETDELIM''@/0/g' \ -e 's/@''GNULIB_GETLINE''@/1/g' \ -e 's/@''GNULIB_OBSTACK_PRINTF''@/1/g' \ -e 's/@''GNULIB_OBSTACK_PRINTF_POSIX''@/0/g' \ -e 's/@''GNULIB_PCLOSE''@/0/g' \ -e 's/@''GNULIB_PERROR''@/1/g' \ -e 's/@''GNULIB_POPEN''@/0/g' \ -e 's/@''GNULIB_PRINTF''@/1/g' \ -e 's/@''GNULIB_PRINTF_POSIX''@/1/g' \ -e 's/@''GNULIB_PUTC''@/1/g' \ -e 's/@''GNULIB_PUTCHAR''@/1/g' \ -e 's/@''GNULIB_PUTS''@/1/g' \ -e 's/@''GNULIB_REMOVE''@/0/g' \ -e 's/@''GNULIB_RENAME''@/1/g' \ -e 's/@''GNULIB_RENAMEAT''@/0/g' \ -e 's/@''GNULIB_SCANF''@/1/g' \ -e 's/@''GNULIB_SNPRINTF''@/1/g' \ -e 
's/@''GNULIB_SPRINTF_POSIX''@/1/g' \ -e 's/@''GNULIB_STDIO_H_NONBLOCKING''@/0/g' \ -e 's/@''GNULIB_STDIO_H_SIGPIPE''@/0/g' \ -e 's/@''GNULIB_TMPFILE''@/0/g' \ -e 's/@''GNULIB_VASPRINTF''@/1/g' \ -e 's/@''GNULIB_VDPRINTF''@/0/g' \ -e 's/@''GNULIB_VFPRINTF''@/1/g' \ -e 's/@''GNULIB_VFPRINTF_POSIX''@/1/g' \ -e 's/@''GNULIB_VFSCANF''@/0/g' \ -e 's/@''GNULIB_VSCANF''@/0/g' \ -e 's/@''GNULIB_VPRINTF''@/1/g' \ -e 's/@''GNULIB_VPRINTF_POSIX''@/0/g' \ -e 's/@''GNULIB_VSNPRINTF''@/1/g' \ -e 's/@''GNULIB_VSPRINTF_POSIX''@/1/g' \ -e 's/@''GNULIB_MDA_FCLOSEALL''@/1/g' \ -e 's/@''GNULIB_MDA_FDOPEN''@/1/g' \ -e 's/@''GNULIB_MDA_FILENO''@/1/g' \ -e 's/@''GNULIB_MDA_GETW''@/1/g' \ -e 's/@''GNULIB_MDA_PUTW''@/1/g' \ -e 's/@''GNULIB_MDA_TEMPNAM''@/1/g' \ < ./lib/stdio.in.h | \ sed -e 's|@''HAVE_DECL_FCLOSEALL''@|1|g' \ -e 's|@''HAVE_DECL_FPURGE''@|1|g' \ -e 's|@''HAVE_DECL_FSEEKO''@|1|g' \ -e 's|@''HAVE_DECL_FTELLO''@|1|g' \ -e 's|@''HAVE_DECL_GETDELIM''@|1|g' \ -e 's|@''HAVE_DECL_GETLINE''@|1|g' \ -e 's|@''HAVE_DECL_OBSTACK_PRINTF''@|1|g' \ -e 's|@''HAVE_DECL_SNPRINTF''@|1|g' \ -e 's|@''HAVE_DECL_VSNPRINTF''@|1|g' \ -e 's|@''HAVE_DPRINTF''@|1|g' \ -e 's|@''HAVE_FSEEKO''@|1|g' \ -e 's|@''HAVE_FTELLO''@|1|g' \ -e 's|@''HAVE_PCLOSE''@|1|g' \ -e 's|@''HAVE_POPEN''@|1|g' \ -e 's|@''HAVE_RENAMEAT''@|1|g' \ -e 's|@''HAVE_VASPRINTF''@|1|g' \ -e 's|@''HAVE_VDPRINTF''@|1|g' \ -e 's|@''REPLACE_DPRINTF''@|0|g' \ -e 's|@''REPLACE_FCLOSE''@|0|g' \ -e 's|@''REPLACE_FDOPEN''@|0|g' \ -e 's|@''REPLACE_FFLUSH''@|0|g' \ -e 's|@''REPLACE_FOPEN''@|0|g' \ -e 's|@''REPLACE_FPRINTF''@|1|g' \ -e 's|@''REPLACE_FPURGE''@|0|g' \ -e 's|@''REPLACE_FREOPEN''@|0|g' \ -e 's|@''REPLACE_FSEEK''@|0|g' \ -e 's|@''REPLACE_FSEEKO''@|0|g' \ -e 's|@''REPLACE_FTELL''@|0|g' \ -e 's|@''REPLACE_FTELLO''@|0|g' \ -e 's|@''REPLACE_GETDELIM''@|0|g' \ -e 's|@''REPLACE_GETLINE''@|0|g' \ -e 's|@''REPLACE_OBSTACK_PRINTF''@|0|g' \ -e 's|@''REPLACE_PERROR''@|0|g' \ -e 's|@''REPLACE_POPEN''@|0|g' \ -e 's|@''REPLACE_PRINTF''@|1|g' \ -e 
's|@''REPLACE_REMOVE''@|0|g' \ -e 's|@''REPLACE_RENAME''@|0|g' \ -e 's|@''REPLACE_RENAMEAT''@|0|g' \ -e 's|@''REPLACE_SNPRINTF''@|1|g' \ -e 's|@''REPLACE_SPRINTF''@|1|g' \ -e 's|@''REPLACE_STDIO_READ_FUNCS''@|0|g' \ -e 's|@''REPLACE_STDIO_WRITE_FUNCS''@|0|g' \ -e 's|@''REPLACE_TMPFILE''@|0|g' \ -e 's|@''REPLACE_VASPRINTF''@|1|g' \ -e 's|@''REPLACE_VDPRINTF''@|0|g' \ -e 's|@''REPLACE_VFPRINTF''@|1|g' \ -e 's|@''REPLACE_VPRINTF''@|0|g' \ -e 's|@''REPLACE_VSNPRINTF''@|1|g' \ -e 's|@''REPLACE_VSPRINTF''@|1|g' \ -e 's|@''ASM_SYMBOL_PREFIX''@|""|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h'; \ } > lib/stdio.h-t && \ mv lib/stdio.h-t lib/stdio.h rm -f lib/stdlib.h-t lib/stdlib.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */' && \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_STDLIB_H''@||g' \ -e 's/@''GNULIB__EXIT''@/0/g' \ -e 's/@''GNULIB_ALIGNED_ALLOC''@/0/g' \ -e 's/@''GNULIB_ATOLL''@/0/g' \ -e 's/@''GNULIB_CALLOC_POSIX''@/1/g' \ -e 's/@''GNULIB_CANONICALIZE_FILE_NAME''@/1/g' \ -e 's/@''GNULIB_FREE_POSIX''@/1/g' \ -e 's/@''GNULIB_GETLOADAVG''@/0/g' \ -e 's/@''GNULIB_GETSUBOPT''@/0/g' \ -e 's/@''GNULIB_GRANTPT''@/0/g' \ -e 's/@''GNULIB_MALLOC_POSIX''@/1/g' \ -e 's/@''GNULIB_MBTOWC''@/0/g' \ -e 's/@''GNULIB_MKDTEMP''@/0/g' \ -e 's/@''GNULIB_MKOSTEMP''@/0/g' \ -e 's/@''GNULIB_MKOSTEMPS''@/0/g' \ -e 's/@''GNULIB_MKSTEMP''@/0/g' \ -e 's/@''GNULIB_MKSTEMPS''@/0/g' \ -e 's/@''GNULIB_POSIX_MEMALIGN''@/0/g' \ -e 's/@''GNULIB_POSIX_OPENPT''@/0/g' \ -e 's/@''GNULIB_PTSNAME''@/0/g' \ -e 's/@''GNULIB_PTSNAME_R''@/0/g' \ -e 's/@''GNULIB_PUTENV''@/0/g' \ -e 's/@''GNULIB_QSORT_R''@/0/g' \ -e 's/@''GNULIB_RANDOM''@/0/g' \ -e 's/@''GNULIB_RANDOM_R''@/0/g' \ -e 's/@''GNULIB_REALLOC_POSIX''@/1/g' \ -e 
's/@''GNULIB_REALLOCARRAY''@/1/g' \ -e 's/@''GNULIB_REALPATH''@/1/g' \ -e 's/@''GNULIB_RPMATCH''@/0/g' \ -e 's/@''GNULIB_SECURE_GETENV''@/0/g' \ -e 's/@''GNULIB_SETENV''@/0/g' \ -e 's/@''GNULIB_STRTOD''@/1/g' \ -e 's/@''GNULIB_STRTOL''@/0/g' \ -e 's/@''GNULIB_STRTOLD''@/0/g' \ -e 's/@''GNULIB_STRTOLL''@/0/g' \ -e 's/@''GNULIB_STRTOUL''@/0/g' \ -e 's/@''GNULIB_STRTOULL''@/0/g' \ -e 's/@''GNULIB_SYSTEM_POSIX''@/0/g' \ -e 's/@''GNULIB_UNLOCKPT''@/0/g' \ -e 's/@''GNULIB_UNSETENV''@/1/g' \ -e 's/@''GNULIB_WCTOMB''@/0/g' \ -e 's/@''GNULIB_MDA_ECVT''@/1/g' \ -e 's/@''GNULIB_MDA_FCVT''@/1/g' \ -e 's/@''GNULIB_MDA_GCVT''@/1/g' \ -e 's/@''GNULIB_MDA_MKTEMP''@/1/g' \ -e 's/@''GNULIB_MDA_PUTENV''@/1/g' \ < ./lib/stdlib.in.h | \ sed -e 's|@''HAVE__EXIT''@|1|g' \ -e 's|@''HAVE_ALIGNED_ALLOC''@|1|g' \ -e 's|@''HAVE_ATOLL''@|1|g' \ -e 's|@''HAVE_CANONICALIZE_FILE_NAME''@|1|g' \ -e 's|@''HAVE_DECL_ECVT''@|1|g' \ -e 's|@''HAVE_DECL_FCVT''@|1|g' \ -e 's|@''HAVE_DECL_GCVT''@|1|g' \ -e 's|@''HAVE_DECL_GETLOADAVG''@|1|g' \ -e 's|@''HAVE_GETSUBOPT''@|1|g' \ -e 's|@''HAVE_GRANTPT''@|1|g' \ -e 's|@''HAVE_INITSTATE''@|1|g' \ -e 's|@''HAVE_DECL_INITSTATE''@|1|g' \ -e 's|@''HAVE_MBTOWC''@|1|g' \ -e 's|@''HAVE_MKDTEMP''@|1|g' \ -e 's|@''HAVE_MKOSTEMP''@|1|g' \ -e 's|@''HAVE_MKOSTEMPS''@|1|g' \ -e 's|@''HAVE_MKSTEMP''@|1|g' \ -e 's|@''HAVE_MKSTEMPS''@|1|g' \ -e 's|@''HAVE_POSIX_MEMALIGN''@|1|g' \ -e 's|@''HAVE_POSIX_OPENPT''@|1|g' \ -e 's|@''HAVE_PTSNAME''@|1|g' \ -e 's|@''HAVE_PTSNAME_R''@|1|g' \ -e 's|@''HAVE_QSORT_R''@|1|g' \ -e 's|@''HAVE_RANDOM''@|1|g' \ -e 's|@''HAVE_RANDOM_H''@|1|g' \ -e 's|@''HAVE_RANDOM_R''@|1|g' \ -e 's|@''HAVE_REALLOCARRAY''@|1|g' \ -e 's|@''HAVE_REALPATH''@|1|g' \ -e 's|@''HAVE_RPMATCH''@|1|g' \ -e 's|@''HAVE_SECURE_GETENV''@|1|g' \ -e 's|@''HAVE_DECL_SETENV''@|1|g' \ -e 's|@''HAVE_SETSTATE''@|1|g' \ -e 's|@''HAVE_DECL_SETSTATE''@|1|g' \ -e 's|@''HAVE_STRTOD''@|1|g' \ -e 's|@''HAVE_STRTOL''@|1|g' \ -e 's|@''HAVE_STRTOLD''@|1|g' \ -e 's|@''HAVE_STRTOLL''@|1|g' \ -e 
's|@''HAVE_STRTOUL''@|1|g' \ -e 's|@''HAVE_STRTOULL''@|1|g' \ -e 's|@''HAVE_STRUCT_RANDOM_DATA''@|1|g' \ -e 's|@''HAVE_SYS_LOADAVG_H''@|0|g' \ -e 's|@''HAVE_UNLOCKPT''@|1|g' \ -e 's|@''HAVE_DECL_UNSETENV''@|1|g' \ -e 's|@''REPLACE_ALIGNED_ALLOC''@|0|g' \ -e 's|@''REPLACE_CALLOC''@|0|g' \ -e 's|@''REPLACE_CANONICALIZE_FILE_NAME''@|0|g' \ -e 's|@''REPLACE_FREE''@|0|g' \ -e 's|@''REPLACE_INITSTATE''@|0|g' \ -e 's|@''REPLACE_MALLOC''@|0|g' \ -e 's|@''REPLACE_MBTOWC''@|0|g' \ -e 's|@''REPLACE_MKSTEMP''@|0|g' \ -e 's|@''REPLACE_POSIX_MEMALIGN''@|0|g' \ -e 's|@''REPLACE_PTSNAME''@|0|g' \ -e 's|@''REPLACE_PTSNAME_R''@|0|g' \ -e 's|@''REPLACE_PUTENV''@|0|g' \ -e 's|@''REPLACE_QSORT_R''@|0|g' \ -e 's|@''REPLACE_RANDOM''@|0|g' \ -e 's|@''REPLACE_RANDOM_R''@|0|g' \ -e 's|@''REPLACE_REALLOC''@|0|g' \ -e 's|@''REPLACE_REALLOCARRAY''@|0|g' \ -e 's|@''REPLACE_REALPATH''@|0|g' \ -e 's|@''REPLACE_SETENV''@|0|g' \ -e 's|@''REPLACE_SETSTATE''@|0|g' \ -e 's|@''REPLACE_STRTOD''@|0|g' \ -e 's|@''REPLACE_STRTOL''@|0|g' \ -e 's|@''REPLACE_STRTOLD''@|0|g' \ -e 's|@''REPLACE_STRTOLL''@|0|g' \ -e 's|@''REPLACE_STRTOUL''@|0|g' \ -e 's|@''REPLACE_STRTOULL''@|0|g' \ -e 's|@''REPLACE_UNSETENV''@|0|g' \ -e 's|@''REPLACE_WCTOMB''@|0|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _Noreturn/r ./lib/_Noreturn.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h'; \ } > lib/stdlib.h-t && \ mv lib/stdlib.h-t lib/stdlib.h rm -f lib/string.h-t lib/string.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! 
*/' && \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_STRING_H''@||g' \ -e 's/@''GNULIB_EXPLICIT_BZERO''@/0/g' \ -e 's/@''GNULIB_FFSL''@/1/g' \ -e 's/@''GNULIB_FFSLL''@/0/g' \ -e 's/@''GNULIB_MBSLEN''@/0/g' \ -e 's/@''GNULIB_MBSNLEN''@/0/g' \ -e 's/@''GNULIB_MBSCHR''@/0/g' \ -e 's/@''GNULIB_MBSRCHR''@/0/g' \ -e 's/@''GNULIB_MBSSTR''@/0/g' \ -e 's/@''GNULIB_MBSCASECMP''@/0/g' \ -e 's/@''GNULIB_MBSNCASECMP''@/0/g' \ -e 's/@''GNULIB_MBSPCASECMP''@/0/g' \ -e 's/@''GNULIB_MBSCASESTR''@/0/g' \ -e 's/@''GNULIB_MBSCSPN''@/0/g' \ -e 's/@''GNULIB_MBSPBRK''@/0/g' \ -e 's/@''GNULIB_MBSSPN''@/0/g' \ -e 's/@''GNULIB_MBSSEP''@/0/g' \ -e 's/@''GNULIB_MBSTOK_R''@/0/g' \ -e 's/@''GNULIB_MEMCHR''@/1/g' \ -e 's/@''GNULIB_MEMMEM''@/0/g' \ -e 's/@''GNULIB_MEMPCPY''@/1/g' \ -e 's/@''GNULIB_MEMRCHR''@/0/g' \ -e 's/@''GNULIB_RAWMEMCHR''@/1/g' \ -e 's/@''GNULIB_STPCPY''@/1/g' \ -e 's/@''GNULIB_STPNCPY''@/1/g' \ -e 's/@''GNULIB_STRCHRNUL''@/0/g' \ -e 's/@''GNULIB_STRDUP''@/1/g' \ -e 's/@''GNULIB_STRNCAT''@/0/g' \ -e 's/@''GNULIB_STRNDUP''@/1/g' \ -e 's/@''GNULIB_STRNLEN''@/0/g' \ -e 's/@''GNULIB_STRPBRK''@/0/g' \ -e 's/@''GNULIB_STRSEP''@/0/g' \ -e 's/@''GNULIB_STRSTR''@/0/g' \ -e 's/@''GNULIB_STRCASESTR''@/0/g' \ -e 's/@''GNULIB_STRTOK_R''@/0/g' \ -e 's/@''GNULIB_STRERROR''@/1/g' \ -e 's/@''GNULIB_STRERROR_R''@/0/g' \ -e 's/@''GNULIB_STRERRORNAME_NP''@/0/g' \ -e 's/@''GNULIB_SIGABBREV_NP''@/0/g' \ -e 's/@''GNULIB_SIGDESCR_NP''@/0/g' \ -e 's/@''GNULIB_STRSIGNAL''@/0/g' \ -e 's/@''GNULIB_STRVERSCMP''@/1/g' \ -e 's/@''GNULIB_MDA_MEMCCPY''@/1/g' \ -e 's/@''GNULIB_MDA_STRDUP''@/1/g' \ < ./lib/string.in.h | \ sed -e 's|@''HAVE_EXPLICIT_BZERO''@|1|g' \ -e 's|@''HAVE_FFSL''@|1|g' \ -e 's|@''HAVE_FFSLL''@|1|g' \ -e 's|@''HAVE_MBSLEN''@|0|g' \ -e 's|@''HAVE_DECL_MEMMEM''@|1|g' \ -e 's|@''HAVE_MEMPCPY''@|1|g' \ -e 's|@''HAVE_DECL_MEMRCHR''@|1|g' \ -e 
's|@''HAVE_RAWMEMCHR''@|1|g' \ -e 's|@''HAVE_STPCPY''@|1|g' \ -e 's|@''HAVE_STPNCPY''@|1|g' \ -e 's|@''HAVE_STRCHRNUL''@|1|g' \ -e 's|@''HAVE_DECL_STRDUP''@|1|g' \ -e 's|@''HAVE_DECL_STRNDUP''@|1|g' \ -e 's|@''HAVE_DECL_STRNLEN''@|1|g' \ -e 's|@''HAVE_STRPBRK''@|1|g' \ -e 's|@''HAVE_STRSEP''@|1|g' \ -e 's|@''HAVE_STRCASESTR''@|1|g' \ -e 's|@''HAVE_DECL_STRTOK_R''@|1|g' \ -e 's|@''HAVE_DECL_STRERROR_R''@|1|g' \ -e 's|@''HAVE_STRERRORNAME_NP''@|1|g' \ -e 's|@''HAVE_SIGABBREV_NP''@|1|g' \ -e 's|@''HAVE_SIGDESCR_NP''@|1|g' \ -e 's|@''HAVE_DECL_STRSIGNAL''@|1|g' \ -e 's|@''HAVE_STRVERSCMP''@|1|g' \ -e 's|@''REPLACE_FFSLL''@|0|g' \ -e 's|@''REPLACE_MEMCHR''@|0|g' \ -e 's|@''REPLACE_MEMMEM''@|0|g' \ -e 's|@''REPLACE_FREE''@|0|g' \ -e 's|@''REPLACE_STPNCPY''@|0|g' \ -e 's|@''REPLACE_STRCHRNUL''@|0|g' \ -e 's|@''REPLACE_STRDUP''@|0|g' \ -e 's|@''REPLACE_STRNCAT''@|0|g' \ -e 's|@''REPLACE_STRNDUP''@|0|g' \ -e 's|@''REPLACE_STRNLEN''@|0|g' \ -e 's|@''REPLACE_STRSTR''@|0|g' \ -e 's|@''REPLACE_STRCASESTR''@|0|g' \ -e 's|@''REPLACE_STRTOK_R''@|0|g' \ -e 's|@''REPLACE_STRERROR''@|0|g' \ -e 's|@''REPLACE_STRERROR_R''@|1|g' \ -e 's|@''REPLACE_STRERRORNAME_NP''@|0|g' \ -e 's|@''REPLACE_STRSIGNAL''@|0|g' \ -e 's|@''UNDEFINE_STRTOK_R''@|0|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h'; \ } > lib/string.h-t && \ mv lib/string.h-t lib/string.h /bin/mkdir -p lib/sys rm -f lib/sys/ioctl.h-t lib/sys/ioctl.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! 
*/'; \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''HAVE_SYS_IOCTL_H''@|1|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_SYS_IOCTL_H''@||g' \ -e 's/@''GNULIB_IOCTL''@/0/g' \ -e 's|@''SYS_IOCTL_H_HAVE_WINSOCK2_H''@|0|g' \ -e 's|@''SYS_IOCTL_H_HAVE_WINSOCK2_H_AND_USE_SOCKETS''@|0|g' \ -e 's|@''REPLACE_IOCTL''@|0|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/sys_ioctl.in.h; \ } > lib/sys/ioctl.h-t && \ mv lib/sys/ioctl.h-t lib/sys/ioctl.h /bin/mkdir -p lib/sys rm -f lib/sys/resource.h-t lib/sys/resource.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_SYS_RESOURCE_H''@||g' \ -e 's|@''HAVE_SYS_RESOURCE_H''@|1|g' \ -e 's/@''GNULIB_GETRUSAGE''@/1/g' \ -e 's/@''HAVE_GETRUSAGE''@/1/g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/sys_resource.in.h; \ } > lib/sys/resource.h-t && \ mv -f lib/sys/resource.h-t lib/sys/resource.h /bin/mkdir -p lib/sys /bin/mkdir -p lib/sys rm -f lib/sys/stat.h-t lib/sys/stat.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! 
*/'; \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_SYS_STAT_H''@||g' \ -e 's|@''WINDOWS_64_BIT_ST_SIZE''@|0|g' \ -e 's|@''WINDOWS_STAT_TIMESPEC''@|0|g' \ -e 's/@''GNULIB_FCHMODAT''@/0/g' \ -e 's/@''GNULIB_FSTAT''@/0/g' \ -e 's/@''GNULIB_FSTATAT''@/0/g' \ -e 's/@''GNULIB_FUTIMENS''@/0/g' \ -e 's/@''GNULIB_GETUMASK''@/0/g' \ -e 's/@''GNULIB_LCHMOD''@/0/g' \ -e 's/@''GNULIB_LSTAT''@/0/g' \ -e 's/@''GNULIB_MKDIR''@/0/g' \ -e 's/@''GNULIB_MKDIRAT''@/0/g' \ -e 's/@''GNULIB_MKFIFO''@/0/g' \ -e 's/@''GNULIB_MKFIFOAT''@/0/g' \ -e 's/@''GNULIB_MKNOD''@/0/g' \ -e 's/@''GNULIB_MKNODAT''@/0/g' \ -e 's/@''GNULIB_STAT''@/1/g' \ -e 's/@''GNULIB_UTIMENSAT''@/0/g' \ -e 's/@''GNULIB_OVERRIDES_STRUCT_STAT''@/0/g' \ -e 's/@''GNULIB_MDA_CHMOD''@/1/g' \ -e 's/@''GNULIB_MDA_MKDIR''@/1/g' \ -e 's/@''GNULIB_MDA_UMASK''@/1/g' \ -e 's|@''HAVE_FCHMODAT''@|1|g' \ -e 's|@''HAVE_FSTATAT''@|1|g' \ -e 's|@''HAVE_FUTIMENS''@|1|g' \ -e 's|@''HAVE_GETUMASK''@|1|g' \ -e 's|@''HAVE_LCHMOD''@|1|g' \ -e 's|@''HAVE_LSTAT''@|1|g' \ -e 's|@''HAVE_MKDIRAT''@|1|g' \ -e 's|@''HAVE_MKFIFO''@|1|g' \ -e 's|@''HAVE_MKFIFOAT''@|1|g' \ -e 's|@''HAVE_MKNOD''@|1|g' \ -e 's|@''HAVE_MKNODAT''@|1|g' \ -e 's|@''HAVE_UTIMENSAT''@|1|g' \ -e 's|@''REPLACE_FCHMODAT''@|0|g' \ -e 's|@''REPLACE_FSTAT''@|0|g' \ -e 's|@''REPLACE_FSTATAT''@|0|g' \ -e 's|@''REPLACE_FUTIMENS''@|0|g' \ -e 's|@''REPLACE_LSTAT''@|0|g' \ -e 's|@''REPLACE_MKDIR''@|0|g' \ -e 's|@''REPLACE_MKFIFO''@|0|g' \ -e 's|@''REPLACE_MKFIFOAT''@|0|g' \ -e 's|@''REPLACE_MKNOD''@|0|g' \ -e 's|@''REPLACE_MKNODAT''@|0|g' \ -e 's|@''REPLACE_STAT''@|0|g' \ -e 's|@''REPLACE_UTIMENSAT''@|0|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/sys_stat.in.h; \ } > lib/sys/stat.h-t && \ mv 
lib/sys/stat.h-t lib/sys/stat.h rm -f lib/sys/time.h-t lib/sys/time.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's/@''HAVE_SYS_TIME_H''@/1/g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_SYS_TIME_H''@||g' \ -e 's/@''GNULIB_GETTIMEOFDAY''@/1/g' \ -e 's|@''HAVE_WINSOCK2_H''@|0|g' \ -e 's/@''HAVE_GETTIMEOFDAY''@/1/g' \ -e 's/@''HAVE_STRUCT_TIMEVAL''@/1/g' \ -e 's/@''REPLACE_GETTIMEOFDAY''@/0/g' \ -e 's/@''REPLACE_STRUCT_TIMEVAL''@/0/g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/sys_time.in.h; \ } > lib/sys/time.h-t && \ mv lib/sys/time.h-t lib/sys/time.h /bin/mkdir -p lib/sys /bin/mkdir -p lib/sys rm -f lib/sys/times.h-t lib/sys/times.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's/@''HAVE_SYS_TIMES_H''@/1/g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_SYS_TIMES_H''@||g' \ -e 's/@''GNULIB_TIMES''@/0/g' \ -e 's|@''HAVE_STRUCT_TMS''@|1|g' \ -e 's|@''HAVE_TIMES''@|1|g' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/sys_times.in.h; \ } > lib/sys/times.h-t && \ mv lib/sys/times.h-t lib/sys/times.h rm -f lib/sys/types.h-t lib/sys/types.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! 
*/'; \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_SYS_TYPES_H''@||g' \ -e 's|@''WINDOWS_64_BIT_OFF_T''@|0|g' \ -e 's|@''WINDOWS_STAT_INODES''@|0|g' \ < ./lib/sys_types.in.h; \ } > lib/sys/types.h-t && \ mv lib/sys/types.h-t lib/sys/types.h /bin/mkdir -p lib/sys rm -f lib/termios.h-t lib/termios.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_TERMIOS_H''@||g' \ -e 's/@''GNULIB_TCGETSID''@/0/g' \ -e 's|@''HAVE_DECL_TCGETSID''@|1|g' \ -e 's|@''HAVE_TERMIOS_H''@|1|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/termios.in.h; \ } > lib/termios.h-t && \ mv lib/termios.h-t lib/termios.h rm -f lib/sys/wait.h-t lib/sys/wait.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_SYS_WAIT_H''@||g' \ -e 's/@''GNULIB_WAITPID''@/1/g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/sys_wait.in.h; \ } > lib/sys/wait.h-t && \ mv lib/sys/wait.h-t lib/sys/wait.h rm -f lib/time.h-t lib/time.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! 
*/' && \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_TIME_H''@||g' \ -e 's/@''GNULIB_CTIME''@/0/g' \ -e 's/@''GNULIB_LOCALTIME''@/0/g' \ -e 's/@''GNULIB_MKTIME''@/0/g' \ -e 's/@''GNULIB_NANOSLEEP''@/0/g' \ -e 's/@''GNULIB_STRFTIME''@/0/g' \ -e 's/@''GNULIB_STRPTIME''@/0/g' \ -e 's/@''GNULIB_TIMEGM''@/0/g' \ -e 's/@''GNULIB_TIMESPEC_GET''@/0/g' \ -e 's/@''GNULIB_TIME_R''@/0/g' \ -e 's/@''GNULIB_TIME_RZ''@/0/g' \ -e 's/@''GNULIB_TZSET''@/0/g' \ -e 's/@''GNULIB_MDA_TZSET''@/1/g' \ -e 's|@''HAVE_DECL_LOCALTIME_R''@|1|g' \ -e 's|@''HAVE_NANOSLEEP''@|1|g' \ -e 's|@''HAVE_STRPTIME''@|1|g' \ -e 's|@''HAVE_TIMEGM''@|1|g' \ -e 's|@''HAVE_TIMESPEC_GET''@|1|g' \ -e 's|@''HAVE_TIMEZONE_T''@|0|g' \ -e 's|@''REPLACE_CTIME''@|GNULIB_PORTCHECK|g' \ -e 's|@''REPLACE_GMTIME''@|0|g' \ -e 's|@''REPLACE_LOCALTIME''@|0|g' \ -e 's|@''REPLACE_LOCALTIME_R''@|GNULIB_PORTCHECK|g' \ -e 's|@''REPLACE_MKTIME''@|GNULIB_PORTCHECK|g' \ -e 's|@''REPLACE_NANOSLEEP''@|GNULIB_PORTCHECK|g' \ -e 's|@''REPLACE_STRFTIME''@|GNULIB_PORTCHECK|g' \ -e 's|@''REPLACE_TIMEGM''@|GNULIB_PORTCHECK|g' \ -e 's|@''REPLACE_TZSET''@|GNULIB_PORTCHECK|g' \ -e 's|@''PTHREAD_H_DEFINES_STRUCT_TIMESPEC''@|0|g' \ -e 's|@''SYS_TIME_H_DEFINES_STRUCT_TIMESPEC''@|0|g' \ -e 's|@''TIME_H_DEFINES_STRUCT_TIMESPEC''@|1|g' \ -e 's|@''UNISTD_H_DEFINES_STRUCT_TIMESPEC''@|0|g' \ -e 's|@''TIME_H_DEFINES_TIME_UTC''@|1|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/time.in.h; \ } > lib/time.h-t && \ mv lib/time.h-t lib/time.h rm -f lib/unistd.h-t lib/unistd.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! 
*/'; \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''HAVE_UNISTD_H''@|1|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_UNISTD_H''@||g' \ -e 's|@''WINDOWS_64_BIT_OFF_T''@|0|g' \ -e 's/@''GNULIB_ACCESS''@/1/g' \ -e 's/@''GNULIB_CHDIR''@/0/g' \ -e 's/@''GNULIB_CHOWN''@/0/g' \ -e 's/@''GNULIB_CLOSE''@/1/g' \ -e 's/@''GNULIB_COPY_FILE_RANGE''@/0/g' \ -e 's/@''GNULIB_DUP''@/0/g' \ -e 's/@''GNULIB_DUP2''@/1/g' \ -e 's/@''GNULIB_DUP3''@/0/g' \ -e 's/@''GNULIB_ENVIRON''@/1/g' \ -e 's/@''GNULIB_EUIDACCESS''@/0/g' \ -e 's/@''GNULIB_EXECL''@/0/g' \ -e 's/@''GNULIB_EXECLE''@/0/g' \ -e 's/@''GNULIB_EXECLP''@/0/g' \ -e 's/@''GNULIB_EXECV''@/0/g' \ -e 's/@''GNULIB_EXECVE''@/0/g' \ -e 's/@''GNULIB_EXECVP''@/0/g' \ -e 's/@''GNULIB_EXECVPE''@/0/g' \ -e 's/@''GNULIB_FACCESSAT''@/0/g' \ -e 's/@''GNULIB_FCHDIR''@/0/g' \ -e 's/@''GNULIB_FCHOWNAT''@/0/g' \ -e 's/@''GNULIB_FDATASYNC''@/0/g' \ -e 's/@''GNULIB_FSYNC''@/1/g' \ -e 's/@''GNULIB_FTRUNCATE''@/0/g' \ -e 's/@''GNULIB_GETCWD''@/1/g' \ -e 's/@''GNULIB_GETDOMAINNAME''@/0/g' \ -e 's/@''GNULIB_GETDTABLESIZE''@/1/g' \ -e 's/@''GNULIB_GETENTROPY''@/0/g' \ -e 's/@''GNULIB_GETGROUPS''@/0/g' \ -e 's/@''GNULIB_GETHOSTNAME''@/0/g' \ -e 's/@''GNULIB_GETLOGIN''@/0/g' \ -e 's/@''GNULIB_GETLOGIN_R''@/0/g' \ -e 's/@''GNULIB_GETOPT_POSIX''@/1/g' \ -e 's/@''GNULIB_GETPAGESIZE''@/0/g' \ -e 's/@''GNULIB_GETPASS''@/0/g' \ -e 's/@''GNULIB_GETUSERSHELL''@/0/g' \ -e 's/@''GNULIB_GROUP_MEMBER''@/0/g' \ -e 's/@''GNULIB_ISATTY''@/0/g' \ -e 's/@''GNULIB_LCHOWN''@/0/g' \ -e 's/@''GNULIB_LINK''@/0/g' \ -e 's/@''GNULIB_LINKAT''@/0/g' \ -e 's/@''GNULIB_LSEEK''@/0/g' \ -e 's/@''GNULIB_PIPE''@/1/g' \ -e 's/@''GNULIB_PIPE2''@/1/g' \ -e 's/@''GNULIB_PREAD''@/0/g' \ -e 's/@''GNULIB_PWRITE''@/0/g' \ -e 's/@''GNULIB_READ''@/0/g' \ -e 's/@''GNULIB_READLINK''@/1/g' \ -e 's/@''GNULIB_READLINKAT''@/0/g' \ -e 's/@''GNULIB_RMDIR''@/0/g' \ -e 
's/@''GNULIB_SETHOSTNAME''@/0/g' \ -e 's/@''GNULIB_SLEEP''@/0/g' \ -e 's/@''GNULIB_SYMLINK''@/0/g' \ -e 's/@''GNULIB_SYMLINKAT''@/0/g' \ -e 's/@''GNULIB_TRUNCATE''@/0/g' \ -e 's/@''GNULIB_TTYNAME_R''@/0/g' \ -e 's/@''GNULIB_UNISTD_H_GETOPT''@/00/g' \ -e 's/@''GNULIB_UNISTD_H_NONBLOCKING''@/0/g' \ -e 's/@''GNULIB_UNISTD_H_SIGPIPE''@/0/g' \ -e 's/@''GNULIB_UNLINK''@/1/g' \ -e 's/@''GNULIB_UNLINKAT''@/0/g' \ -e 's/@''GNULIB_USLEEP''@/0/g' \ -e 's/@''GNULIB_WRITE''@/0/g' \ -e 's/@''GNULIB_MDA_ACCESS''@/1/g' \ -e 's/@''GNULIB_MDA_CHDIR''@/1/g' \ -e 's/@''GNULIB_MDA_CLOSE''@/1/g' \ -e 's/@''GNULIB_MDA_DUP''@/1/g' \ -e 's/@''GNULIB_MDA_DUP2''@/1/g' \ -e 's/@''GNULIB_MDA_EXECL''@/1/g' \ -e 's/@''GNULIB_MDA_EXECLE''@/1/g' \ -e 's/@''GNULIB_MDA_EXECLP''@/1/g' \ -e 's/@''GNULIB_MDA_EXECV''@/1/g' \ -e 's/@''GNULIB_MDA_EXECVE''@/1/g' \ -e 's/@''GNULIB_MDA_EXECVP''@/1/g' \ -e 's/@''GNULIB_MDA_EXECVPE''@/1/g' \ -e 's/@''GNULIB_MDA_GETCWD''@/1/g' \ -e 's/@''GNULIB_MDA_GETPID''@/1/g' \ -e 's/@''GNULIB_MDA_ISATTY''@/1/g' \ -e 's/@''GNULIB_MDA_LSEEK''@/1/g' \ -e 's/@''GNULIB_MDA_READ''@/1/g' \ -e 's/@''GNULIB_MDA_RMDIR''@/1/g' \ -e 's/@''GNULIB_MDA_SWAB''@/1/g' \ -e 's/@''GNULIB_MDA_UNLINK''@/1/g' \ -e 's/@''GNULIB_MDA_WRITE''@/1/g' \ < ./lib/unistd.in.h | \ sed -e 's|@''HAVE_CHOWN''@|1|g' \ -e 's|@''HAVE_COPY_FILE_RANGE''@|1|g' \ -e 's|@''HAVE_DUP3''@|1|g' \ -e 's|@''HAVE_EUIDACCESS''@|1|g' \ -e 's|@''HAVE_EXECVPE''@|1|g' \ -e 's|@''HAVE_FACCESSAT''@|1|g' \ -e 's|@''HAVE_FCHDIR''@|1|g' \ -e 's|@''HAVE_FCHOWNAT''@|1|g' \ -e 's|@''HAVE_FDATASYNC''@|1|g' \ -e 's|@''HAVE_FSYNC''@|1|g' \ -e 's|@''HAVE_FTRUNCATE''@|1|g' \ -e 's|@''HAVE_GETDTABLESIZE''@|1|g' \ -e 's|@''HAVE_GETENTROPY''@|1|g' \ -e 's|@''HAVE_GETGROUPS''@|1|g' \ -e 's|@''HAVE_GETHOSTNAME''@|1|g' \ -e 's|@''HAVE_GETPAGESIZE''@|1|g' \ -e 's|@''HAVE_GETPASS''@|1|g' \ -e 's|@''HAVE_GROUP_MEMBER''@|1|g' \ -e 's|@''HAVE_LCHOWN''@|1|g' \ -e 's|@''HAVE_LINK''@|1|g' \ -e 's|@''HAVE_LINKAT''@|1|g' \ -e 's|@''HAVE_PIPE''@|1|g' \ -e 
's|@''HAVE_PIPE2''@|1|g' \ -e 's|@''HAVE_PREAD''@|1|g' \ -e 's|@''HAVE_PWRITE''@|1|g' \ -e 's|@''HAVE_READLINK''@|1|g' \ -e 's|@''HAVE_READLINKAT''@|1|g' \ -e 's|@''HAVE_SETHOSTNAME''@|1|g' \ -e 's|@''HAVE_SLEEP''@|1|g' \ -e 's|@''HAVE_SYMLINK''@|1|g' \ -e 's|@''HAVE_SYMLINKAT''@|1|g' \ -e 's|@''HAVE_UNLINKAT''@|1|g' \ -e 's|@''HAVE_USLEEP''@|1|g' \ -e 's|@''HAVE_DECL_ENVIRON''@|1|g' \ -e 's|@''HAVE_DECL_EXECVPE''@|1|g' \ -e 's|@''HAVE_DECL_FCHDIR''@|1|g' \ -e 's|@''HAVE_DECL_FDATASYNC''@|1|g' \ -e 's|@''HAVE_DECL_GETDOMAINNAME''@|1|g' \ -e 's|@''HAVE_DECL_GETLOGIN''@|1|g' \ -e 's|@''HAVE_DECL_GETLOGIN_R''@|1|g' \ -e 's|@''HAVE_DECL_GETPAGESIZE''@|1|g' \ -e 's|@''HAVE_DECL_GETUSERSHELL''@|1|g' \ -e 's|@''HAVE_DECL_SETHOSTNAME''@|1|g' \ -e 's|@''HAVE_DECL_TRUNCATE''@|1|g' \ -e 's|@''HAVE_DECL_TTYNAME_R''@|1|g' \ -e 's|@''HAVE_OS_H''@|0|g' \ -e 's|@''HAVE_SYS_PARAM_H''@|0|g' \ | \ sed -e 's|@''REPLACE_ACCESS''@|0|g' \ -e 's|@''REPLACE_CHOWN''@|0|g' \ -e 's|@''REPLACE_CLOSE''@|0|g' \ -e 's|@''REPLACE_DUP''@|0|g' \ -e 's|@''REPLACE_DUP2''@|0|g' \ -e 's|@''REPLACE_EXECL''@|0|g' \ -e 's|@''REPLACE_EXECLE''@|0|g' \ -e 's|@''REPLACE_EXECLP''@|0|g' \ -e 's|@''REPLACE_EXECV''@|0|g' \ -e 's|@''REPLACE_EXECVE''@|0|g' \ -e 's|@''REPLACE_EXECVP''@|0|g' \ -e 's|@''REPLACE_EXECVPE''@|0|g' \ -e 's|@''REPLACE_FACCESSAT''@|0|g' \ -e 's|@''REPLACE_FCHOWNAT''@|0|g' \ -e 's|@''REPLACE_FTRUNCATE''@|0|g' \ -e 's|@''REPLACE_GETCWD''@|0|g' \ -e 's|@''REPLACE_GETDOMAINNAME''@|0|g' \ -e 's|@''REPLACE_GETDTABLESIZE''@|0|g' \ -e 's|@''REPLACE_GETLOGIN_R''@|0|g' \ -e 's|@''REPLACE_GETGROUPS''@|0|g' \ -e 's|@''REPLACE_GETPAGESIZE''@|0|g' \ -e 's|@''REPLACE_GETPASS''@|0|g' \ -e 's|@''REPLACE_ISATTY''@|0|g' \ -e 's|@''REPLACE_LCHOWN''@|0|g' \ -e 's|@''REPLACE_LINK''@|0|g' \ -e 's|@''REPLACE_LINKAT''@|0|g' \ -e 's|@''REPLACE_LSEEK''@|0|g' \ -e 's|@''REPLACE_PREAD''@|0|g' \ -e 's|@''REPLACE_PWRITE''@|0|g' \ -e 's|@''REPLACE_READ''@|0|g' \ -e 's|@''REPLACE_READLINK''@|0|g' \ -e 
's|@''REPLACE_READLINKAT''@|0|g' \ -e 's|@''REPLACE_RMDIR''@|0|g' \ -e 's|@''REPLACE_SLEEP''@|0|g' \ -e 's|@''REPLACE_SYMLINK''@|0|g' \ -e 's|@''REPLACE_SYMLINKAT''@|0|g' \ -e 's|@''REPLACE_TRUNCATE''@|0|g' \ -e 's|@''REPLACE_TTYNAME_R''@|0|g' \ -e 's|@''REPLACE_UNLINK''@|0|g' \ -e 's|@''REPLACE_UNLINKAT''@|0|g' \ -e 's|@''REPLACE_USLEEP''@|0|g' \ -e 's|@''REPLACE_WRITE''@|0|g' \ -e 's|@''UNISTD_H_HAVE_SYS_RANDOM_H''@|0|g' \ -e 's|@''UNISTD_H_HAVE_WINSOCK2_H''@|0|g' \ -e 's|@''UNISTD_H_HAVE_WINSOCK2_H_AND_USE_SOCKETS''@|0|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h'; \ } > lib/unistd.h-t && \ mv lib/unistd.h-t lib/unistd.h
rm -f lib/unistr.h-t lib/unistr.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \ cat ./lib/unistr.in.h; \ } > lib/unistr.h-t && \ mv -f lib/unistr.h-t lib/unistr.h
rm -f lib/unitypes.h-t lib/unitypes.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \ cat ./lib/unitypes.in.h; \ } > lib/unitypes.h-t && \ mv -f lib/unitypes.h-t lib/unitypes.h
rm -f lib/uniwidth.h-t lib/uniwidth.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \ cat ./lib/uniwidth.in.h; \ } > lib/uniwidth.h-t && \ mv -f lib/uniwidth.h-t lib/uniwidth.h
rm -f lib/wchar.h-t lib/wchar.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY!
*/'; \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''HAVE_FEATURES_H''@|1|g' \ -e 's|@''NEXT_WCHAR_H''@||g' \ -e 's|@''HAVE_WCHAR_H''@|1|g' \ -e 's/@''HAVE_CRTDEFS_H''@/0/g' \ -e 's/@''GNULIBHEADERS_OVERRIDE_WINT_T''@/0/g' \ -e 's/@''GNULIB_BTOWC''@/0/g' \ -e 's/@''GNULIB_WCTOB''@/0/g' \ -e 's/@''GNULIB_MBSINIT''@/1/g' \ -e 's/@''GNULIB_MBRTOWC''@/1/g' \ -e 's/@''GNULIB_MBRLEN''@/0/g' \ -e 's/@''GNULIB_MBSRTOWCS''@/0/g' \ -e 's/@''GNULIB_MBSNRTOWCS''@/0/g' \ -e 's/@''GNULIB_WCRTOMB''@/0/g' \ -e 's/@''GNULIB_WCSRTOMBS''@/0/g' \ -e 's/@''GNULIB_WCSNRTOMBS''@/0/g' \ -e 's/@''GNULIB_WCWIDTH''@/1/g' \ -e 's/@''GNULIB_WMEMCHR''@/0/g' \ -e 's/@''GNULIB_WMEMCMP''@/0/g' \ -e 's/@''GNULIB_WMEMCPY''@/0/g' \ -e 's/@''GNULIB_WMEMMOVE''@/0/g' \ -e 's/@''GNULIB_WMEMPCPY''@/0/g' \ -e 's/@''GNULIB_WMEMSET''@/0/g' \ -e 's/@''GNULIB_WCSLEN''@/0/g' \ -e 's/@''GNULIB_WCSNLEN''@/0/g' \ -e 's/@''GNULIB_WCSCPY''@/0/g' \ -e 's/@''GNULIB_WCPCPY''@/0/g' \ -e 's/@''GNULIB_WCSNCPY''@/0/g' \ -e 's/@''GNULIB_WCPNCPY''@/0/g' \ -e 's/@''GNULIB_WCSCAT''@/0/g' \ -e 's/@''GNULIB_WCSNCAT''@/0/g' \ -e 's/@''GNULIB_WCSCMP''@/0/g' \ -e 's/@''GNULIB_WCSNCMP''@/0/g' \ -e 's/@''GNULIB_WCSCASECMP''@/0/g' \ -e 's/@''GNULIB_WCSNCASECMP''@/0/g' \ -e 's/@''GNULIB_WCSCOLL''@/0/g' \ -e 's/@''GNULIB_WCSXFRM''@/0/g' \ -e 's/@''GNULIB_WCSDUP''@/0/g' \ -e 's/@''GNULIB_WCSCHR''@/0/g' \ -e 's/@''GNULIB_WCSRCHR''@/0/g' \ -e 's/@''GNULIB_WCSCSPN''@/0/g' \ -e 's/@''GNULIB_WCSSPN''@/0/g' \ -e 's/@''GNULIB_WCSPBRK''@/0/g' \ -e 's/@''GNULIB_WCSSTR''@/0/g' \ -e 's/@''GNULIB_WCSTOK''@/0/g' \ -e 's/@''GNULIB_WCSWIDTH''@/0/g' \ -e 's/@''GNULIB_WCSFTIME''@/0/g' \ -e 's/@''GNULIB_MDA_WCSDUP''@/1/g' \ < ./lib/wchar.in.h | \ sed -e 's|@''HAVE_WINT_T''@|1|g' \ -e 's|@''HAVE_BTOWC''@|1|g' \ -e 's|@''HAVE_MBSINIT''@|1|g' \ -e 's|@''HAVE_MBRTOWC''@|1|g' \ -e 's|@''HAVE_MBRLEN''@|1|g' \ -e 
's|@''HAVE_MBSRTOWCS''@|1|g' \ -e 's|@''HAVE_MBSNRTOWCS''@|1|g' \ -e 's|@''HAVE_WCRTOMB''@|1|g' \ -e 's|@''HAVE_WCSRTOMBS''@|1|g' \ -e 's|@''HAVE_WCSNRTOMBS''@|1|g' \ -e 's|@''HAVE_WMEMCHR''@|1|g' \ -e 's|@''HAVE_WMEMCMP''@|1|g' \ -e 's|@''HAVE_WMEMCPY''@|1|g' \ -e 's|@''HAVE_WMEMMOVE''@|1|g' \ -e 's|@''HAVE_WMEMPCPY''@|1|g' \ -e 's|@''HAVE_WMEMSET''@|1|g' \ -e 's|@''HAVE_WCSLEN''@|1|g' \ -e 's|@''HAVE_WCSNLEN''@|1|g' \ -e 's|@''HAVE_WCSCPY''@|1|g' \ -e 's|@''HAVE_WCPCPY''@|1|g' \ -e 's|@''HAVE_WCSNCPY''@|1|g' \ -e 's|@''HAVE_WCPNCPY''@|1|g' \ -e 's|@''HAVE_WCSCAT''@|1|g' \ -e 's|@''HAVE_WCSNCAT''@|1|g' \ -e 's|@''HAVE_WCSCMP''@|1|g' \ -e 's|@''HAVE_WCSNCMP''@|1|g' \ -e 's|@''HAVE_WCSCASECMP''@|1|g' \ -e 's|@''HAVE_WCSNCASECMP''@|1|g' \ -e 's|@''HAVE_WCSCOLL''@|1|g' \ -e 's|@''HAVE_WCSXFRM''@|1|g' \ -e 's|@''HAVE_WCSDUP''@|1|g' \ -e 's|@''HAVE_WCSCHR''@|1|g' \ -e 's|@''HAVE_WCSRCHR''@|1|g' \ -e 's|@''HAVE_WCSCSPN''@|1|g' \ -e 's|@''HAVE_WCSSPN''@|1|g' \ -e 's|@''HAVE_WCSPBRK''@|1|g' \ -e 's|@''HAVE_WCSSTR''@|1|g' \ -e 's|@''HAVE_WCSTOK''@|1|g' \ -e 's|@''HAVE_WCSWIDTH''@|1|g' \ -e 's|@''HAVE_WCSFTIME''@|1|g' \ -e 's|@''HAVE_DECL_WCTOB''@|1|g' \ -e 's|@''HAVE_DECL_WCSDUP''@|1|g' \ -e 's|@''HAVE_DECL_WCWIDTH''@|1|g' \ | \ sed -e 's|@''REPLACE_MBSTATE_T''@|0|g' \ -e 's|@''REPLACE_BTOWC''@|0|g' \ -e 's|@''REPLACE_WCTOB''@|0|g' \ -e 's|@''REPLACE_FREE''@|0|g' \ -e 's|@''REPLACE_MBSINIT''@|0|g' \ -e 's|@''REPLACE_MBRTOWC''@|1|g' \ -e 's|@''REPLACE_MBRLEN''@|0|g' \ -e 's|@''REPLACE_MBSRTOWCS''@|0|g' \ -e 's|@''REPLACE_MBSNRTOWCS''@|0|g' \ -e 's|@''REPLACE_WCRTOMB''@|0|g' \ -e 's|@''REPLACE_WCSRTOMBS''@|0|g' \ -e 's|@''REPLACE_WCSNRTOMBS''@|0|g' \ -e 's|@''REPLACE_WCWIDTH''@|0|g' \ -e 's|@''REPLACE_WCSWIDTH''@|0|g' \ -e 's|@''REPLACE_WCSFTIME''@|0|g' \ -e 's|@''REPLACE_WCSTOK''@|0|g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_ARG_NONNULL/r ./lib/arg-nonnull.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h'; \ } > 
lib/wchar.h-t && \ mv lib/wchar.h-t lib/wchar.h
rm -f lib/wctype.h-t lib/wctype.h && \ { echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \ sed -e 's|@''GUARD_PREFIX''@|GL|g' \ -e 's/@''HAVE_WCTYPE_H''@/1/g' \ -e 's|@''INCLUDE_NEXT''@|include_next|g' \ -e 's|@''PRAGMA_SYSTEM_HEADER''@|#pragma GCC system_header|g' \ -e 's|@''PRAGMA_COLUMNS''@||g' \ -e 's|@''NEXT_WCTYPE_H''@||g' \ -e 's/@''HAVE_CRTDEFS_H''@/0/g' \ -e 's/@''GNULIBHEADERS_OVERRIDE_WINT_T''@/0/g' \ -e 's/@''GNULIB_ISWBLANK''@/1/g' \ -e 's/@''GNULIB_ISWDIGIT''@/1/g' \ -e 's/@''GNULIB_ISWXDIGIT''@/1/g' \ -e 's/@''GNULIB_WCTYPE''@/0/g' \ -e 's/@''GNULIB_ISWCTYPE''@/0/g' \ -e 's/@''GNULIB_WCTRANS''@/0/g' \ -e 's/@''GNULIB_TOWCTRANS''@/0/g' \ -e 's/@''HAVE_ISWBLANK''@/1/g' \ -e 's/@''HAVE_ISWCNTRL''@/1/g' \ -e 's/@''HAVE_WCTYPE_T''@/1/g' \ -e 's/@''HAVE_WCTRANS_T''@/1/g' \ -e 's/@''HAVE_WINT_T''@/1/g' \ -e 's/@''REPLACE_ISWBLANK''@/0/g' \ -e 's/@''REPLACE_ISWDIGIT''@/0/g' \ -e 's/@''REPLACE_ISWXDIGIT''@/0/g' \ -e 's/@''REPLACE_ISWCNTRL''@/0/g' \ -e 's/@''REPLACE_TOWLOWER''@/0/g' \ -e '/definitions of _GL_FUNCDECL_RPL/r ./lib/c++defs.h' \ -e '/definition of _GL_WARN_ON_USE/r ./lib/warn-on-use.h' \ < ./lib/wctype.in.h; \ } > lib/wctype.h-t && \ mv lib/wctype.h-t lib/wctype.h
echo 3.8.2 > .version-t && mv .version-t .version
make all-recursive
make[2]: Entering directory '/build/bison-3.8.2+dfsg'
Making all in po
make[3]: Entering directory '/build/bison-3.8.2+dfsg/po'
make[3]: Nothing to be done for 'all'.
make[3]: Leaving directory '/build/bison-3.8.2+dfsg/po'
Making all in runtime-po
make[3]: Entering directory '/build/bison-3.8.2+dfsg/runtime-po'
make[3]: Nothing to be done for 'all'.
make[3]: Leaving directory '/build/bison-3.8.2+dfsg/runtime-po'
Making all in gnulib-po
make[3]: Entering directory '/build/bison-3.8.2+dfsg/gnulib-po'
make[3]: Nothing to be done for 'all'.
make[3]: Leaving directory '/build/bison-3.8.2+dfsg/gnulib-po'
Making all in .
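The header-generation recipes in this log all follow one gnulib pattern: substitute the `@PLACEHOLDER@` tokens in a `.in.h` template with `sed`, write the result to a temporary `-t` file, then rename it into place so the header never exists half-written. A minimal self-contained sketch of that pattern, using hypothetical `demo/lib/foo` names rather than bison's real templates:

```shell
# Sketch of the gnulib header-generation pattern (hypothetical names).
# The @''NAME''@ quoting in the sed script keeps the makefile's own
# placeholder from being rewritten by config.status.
mkdir -p demo/lib
printf '#if @HAVE_FOO@\n#define FOO 1\n#endif\n' > demo/lib/foo.in.h

rm -f demo/lib/foo.h-t demo/lib/foo.h && \
{ echo '/* DO NOT EDIT! GENERATED AUTOMATICALLY! */'; \
  sed -e 's/@''HAVE_FOO''@/1/g' < demo/lib/foo.in.h; \
} > demo/lib/foo.h-t && \
mv demo/lib/foo.h-t demo/lib/foo.h

grep '#if' demo/lib/foo.h   # the placeholder has been replaced with 1
```

Writing to `foo.h-t` first and then renaming means a failed `sed` leaves no stale `foo.h` behind for later compile steps to pick up.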
make[3]: Entering directory '/build/bison-3.8.2+dfsg'
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-bitsetv.o `test -f 'lib/bitsetv.c' || echo './'`lib/bitsetv.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-c-ctype.o `test -f 'lib/c-ctype.c' || echo './'`lib/c-ctype.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-c-strcasecmp.o `test -f 'lib/c-strcasecmp.c' || echo './'`lib/c-strcasecmp.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-c-strncasecmp.o `test -f 'lib/c-strncasecmp.c' || echo './'`lib/c-strncasecmp.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-canonicalize.o `test -f 'lib/canonicalize.c' || echo './'`lib/canonicalize.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-careadlinkat.o `test -f 'lib/careadlinkat.c' || echo './'`lib/careadlinkat.c
lib/canonicalize.c: In function 'canonicalize_filename_mode':
lib/canonicalize.c:484:5: warning: #warning "GCC might issue a bogus -Wreturn-local-addr warning here." [-Wcpp]
  484 | #warning "GCC might issue a bogus -Wreturn-local-addr warning here."
      |  ^~~~~~~
lib/canonicalize.c:485:5: warning: #warning "See ." [-Wcpp]
  485 | #warning "See ."
      |  ^~~~~~~
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-cloexec.o `test -f 'lib/cloexec.c' || echo './'`lib/cloexec.c
lib/careadlinkat.c: In function 'careadlinkat':
lib/careadlinkat.c:178:5: warning: #warning "GCC might issue a bogus -Wreturn-local-addr warning here." [-Wcpp]
  178 | #warning "GCC might issue a bogus -Wreturn-local-addr warning here."
      |  ^~~~~~~
lib/careadlinkat.c:179:5: warning: #warning "See ." [-Wcpp]
  179 | #warning "See ."
      |  ^~~~~~~
lib/careadlinkat.c:182:10: warning: function may return address of local variable [-Wreturn-local-addr]
  182 |   return readlink_stk (fd, filename, buffer, buffer_size, alloc,
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  183 |                        preadlinkat, stack_buf);
      |                        ~~~~~~~~~~~~~~~~~~~~~~~
lib/careadlinkat.c:181:8: note: declared here
  181 |   char stack_buf[STACK_BUF_SIZE];
      |        ^~~~~~~~~
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-close-stream.o `test -f 'lib/close-stream.c' || echo './'`lib/close-stream.c
gcc -DEXEEXT=\"\" -I. -I./lib -I.
-I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-closeout.o `test -f 'lib/closeout.c' || echo './'`lib/closeout.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-concat-filename.o `test -f 'lib/concat-filename.c' || echo './'`lib/concat-filename.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-dirname.o `test -f 'lib/dirname.c' || echo './'`lib/dirname.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-basename.o `test -f 'lib/basename.c' || echo './'`lib/basename.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-dirname-lgpl.o `test -f 'lib/dirname-lgpl.c' || echo './'`lib/dirname-lgpl.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-stripslash.o `test -f 'lib/stripslash.c' || echo './'`lib/stripslash.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-execute.o `test -f 'lib/execute.c' || echo './'`lib/execute.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-exitfail.o `test -f 'lib/exitfail.c' || echo './'`lib/exitfail.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-fatal-signal.o `test -f 'lib/fatal-signal.c' || echo './'`lib/fatal-signal.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-fd-safer-flag.o `test -f 'lib/fd-safer-flag.c' || echo './'`lib/fd-safer-flag.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-dup-safer-flag.o `test -f 'lib/dup-safer-flag.c' || echo './'`lib/dup-safer-flag.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-file-set.o `test -f 'lib/file-set.c' || echo './'`lib/file-set.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-findprog-in.o `test -f 'lib/findprog-in.c' || echo './'`lib/findprog-in.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-fopen-safer.o `test -f 'lib/fopen-safer.c' || echo './'`lib/fopen-safer.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-fstrcmp.o `test -f 'lib/fstrcmp.c' || echo './'`lib/fstrcmp.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-gethrxtime.o `test -f 'lib/gethrxtime.c' || echo './'`lib/gethrxtime.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-xtime.o `test -f 'lib/xtime.c' || echo './'`lib/xtime.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-getprogname.o `test -f 'lib/getprogname.c' || echo './'`lib/getprogname.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=.
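Every compile line above uses the automake idiom `` `test -f 'lib/X.c' || echo './'`lib/X.c ``: if the source file exists relative to the build directory it is used as-is, otherwise a source-directory prefix is prepended (just `./` here, since this build runs in the unpacked source tree). A small sketch of the idiom with hypothetical demo file names:

```shell
# Automake's srcdir-vs-builddir idiom from the compile lines above.
# The command substitution is empty when the file is present in the
# build dir, and './' (the srcdir prefix) otherwise.
vpath_name() {
  file=$1
  printf '%s%s\n' "$(test -f "$file" || echo './')" "$file"
}

touch present.c                  # hypothetical demo files, not bison's
vpath_name present.c             # -> present.c
vpath_name absent.c              # -> ./absent.c
```

In an out-of-tree (VPATH) build the prefix would be the configured `$(srcdir)` instead of `./`, which is why the idiom appears even when it looks redundant.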
-fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-gettime.o `test -f 'lib/gettime.c' || echo './'`lib/gettime.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-hard-locale.o `test -f 'lib/hard-locale.c' || echo './'`lib/hard-locale.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-hash.o `test -f 'lib/hash.c' || echo './'`lib/hash.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-gl_hash_map.o `test -f 'lib/gl_hash_map.c' || echo './'`lib/gl_hash_map.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-hash-pjw.o `test -f 'lib/hash-pjw.c' || echo './'`lib/hash-pjw.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-hash-triple-simple.o `test -f 'lib/hash-triple-simple.c' || echo './'`lib/hash-triple-simple.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-ialloc.o `test -f 'lib/ialloc.c' || echo './'`lib/ialloc.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-integer_length.o `test -f 'lib/integer_length.c' || echo './'`lib/integer_length.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-integer_length_l.o `test -f 'lib/integer_length_l.c' || echo './'`lib/integer_length_l.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-gl_linked_list.o `test -f 'lib/gl_linked_list.c' || echo './'`lib/gl_linked_list.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-gl_list.o `test -f 'lib/gl_list.c' || echo './'`lib/gl_list.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-localcharset.o `test -f 'lib/localcharset.c' || echo './'`lib/localcharset.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-gl_map.o `test -f 'lib/gl_map.c' || echo './'`lib/gl_map.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-math.o `test -f 'lib/math.c' || echo './'`lib/math.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-mbchar.o `test -f 'lib/mbchar.c' || echo './'`lib/mbchar.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-mbfile.o `test -f 'lib/mbfile.c' || echo './'`lib/mbfile.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-mbswidth.o `test -f 'lib/mbswidth.c' || echo './'`lib/mbswidth.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-gl_oset.o `test -f 'lib/gl_oset.c' || echo './'`lib/gl_oset.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-pipe2.o `test -f 'lib/pipe2.c' || echo './'`lib/pipe2.c
gcc -DEXEEXT=\"\" -I. -I./lib -I.
-I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-pipe2-safer.o `test -f 'lib/pipe2-safer.c' || echo './'`lib/pipe2-safer.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-printf-frexp.o `test -f 'lib/printf-frexp.c' || echo './'`lib/printf-frexp.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-printf-frexpl.o `test -f 'lib/printf-frexpl.c' || echo './'`lib/printf-frexpl.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-progname.o `test -f 'lib/progname.c' || echo './'`lib/progname.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-quotearg.o `test -f 'lib/quotearg.c' || echo './'`lib/quotearg.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-gl_rbtree_oset.o `test -f 'lib/gl_rbtree_oset.c' || echo './'`lib/gl_rbtree_oset.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-gl_rbtreehash_list.o `test -f 'lib/gl_rbtreehash_list.c' || echo './'`lib/gl_rbtreehash_list.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-setlocale_null.o `test -f 'lib/setlocale_null.c' || echo './'`lib/setlocale_null.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-sig-handler.o `test -f 'lib/sig-handler.c' || echo './'`lib/sig-handler.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-spawn-pipe.o `test -f 'lib/spawn-pipe.c' || echo './'`lib/spawn-pipe.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/glthread/libbison_a-threadlib.o `test -f 'lib/glthread/threadlib.c' || echo './'`lib/glthread/threadlib.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-timespec.o `test -f 'lib/timespec.c' || echo './'`lib/timespec.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-timevar.o `test -f 'lib/timevar.c' || echo './'`lib/timevar.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/glthread/libbison_a-tls.o `test -f 'lib/glthread/tls.c' || echo './'`lib/glthread/tls.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-unicodeio.o `test -f 'lib/unicodeio.c' || echo './'`lib/unicodeio.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-unistd.o `test -f 'lib/unistd.c' || echo './'`lib/unistd.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-dup-safer.o `test -f 'lib/dup-safer.c' || echo './'`lib/dup-safer.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-fd-safer.o `test -f 'lib/fd-safer.c' || echo './'`lib/fd-safer.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=.
-fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-pipe-safer.o `test -f 'lib/pipe-safer.c' || echo './'`lib/pipe-safer.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-wait-process.o `test -f 'lib/wait-process.c' || echo './'`lib/wait-process.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-wctype-h.o `test -f 'lib/wctype-h.c' || echo './'`lib/wctype-h.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-xmalloc.o `test -f 'lib/xmalloc.c' || echo './'`lib/xmalloc.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-xalloc-die.o `test -f 'lib/xalloc-die.c' || echo './'`lib/xalloc-die.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-xconcat-filename.o `test -f 'lib/xconcat-filename.c' || echo './'`lib/xconcat-filename.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-xhash.o `test -f 'lib/xhash.c' || echo './'`lib/xhash.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-gl_xlist.o `test -f 'lib/gl_xlist.c' || echo './'`lib/gl_xlist.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-gl_xmap.o `test -f 'lib/gl_xmap.c' || echo './'`lib/gl_xmap.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-xmemdup0.o `test -f 'lib/xmemdup0.c' || echo './'`lib/xmemdup0.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-xreadlink.o `test -f 'lib/xreadlink.c' || echo './'`lib/xreadlink.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-xsize.o `test -f 'lib/xsize.c' || echo './'`lib/xsize.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-xstrndup.o `test -f 'lib/xstrndup.c' || echo './'`lib/xstrndup.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-get-errno.o `test -f 'lib/get-errno.c' || echo './'`lib/get-errno.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-path-join.o `test -f 'lib/path-join.c' || echo './'`lib/path-join.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-asnprintf.o `test -f 'lib/asnprintf.c' || echo './'`lib/asnprintf.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-asprintf.o `test -f 'lib/asprintf.c' || echo './'`lib/asprintf.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-fcntl.o `test -f 'lib/fcntl.c' || echo './'`lib/fcntl.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-fprintf.o `test -f 'lib/fprintf.c' || echo './'`lib/fprintf.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=.
-fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-fseterr.o `test -f 'lib/fseterr.c' || echo './'`lib/fseterr.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-mbrtowc.o `test -f 'lib/mbrtowc.c' || echo './'`lib/mbrtowc.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-obstack.o `test -f 'lib/obstack.c' || echo './'`lib/obstack.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-printf.o `test -f 'lib/printf.c' || echo './'`lib/printf.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-printf-args.o `test -f 'lib/printf-args.c' || echo './'`lib/printf-args.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-printf-parse.o `test -f 'lib/printf-parse.c' || echo './'`lib/printf-parse.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. 
-fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-readline.o `test -f 'lib/readline.c' || echo './'`lib/readline.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-snprintf.o `test -f 'lib/snprintf.c' || echo './'`lib/snprintf.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-spawn_faction_addchdir.o `test -f 'lib/spawn_faction_addchdir.c' || echo './'`lib/spawn_faction_addchdir.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-sprintf.o `test -f 'lib/sprintf.c' || echo './'`lib/sprintf.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-vasnprintf.o `test -f 'lib/vasnprintf.c' || echo './'`lib/vasnprintf.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-vasprintf.o `test -f 'lib/vasprintf.c' || echo './'`lib/vasprintf.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. 
-fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-vfprintf.o `test -f 'lib/vfprintf.c' || echo './'`lib/vfprintf.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-vsnprintf.o `test -f 'lib/vsnprintf.c' || echo './'`lib/vsnprintf.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-vsprintf.o `test -f 'lib/vsprintf.c' || echo './'`lib/vsprintf.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/main.o lib/main.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/yyerror.o lib/yyerror.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-AnnotationList.o `test -f 'src/AnnotationList.c' || echo './'`src/AnnotationList.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-InadequacyList.o `test -f 'src/InadequacyList.c' || echo './'`src/InadequacyList.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. 
-fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-Sbitset.o `test -f 'src/Sbitset.c' || echo './'`src/Sbitset.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-assoc.o `test -f 'src/assoc.c' || echo './'`src/assoc.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-closure.o `test -f 'src/closure.c' || echo './'`src/closure.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-complain.o `test -f 'src/complain.c' || echo './'`src/complain.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-conflicts.o `test -f 'src/conflicts.c' || echo './'`src/conflicts.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-counterexample.o `test -f 'src/counterexample.c' || echo './'`src/counterexample.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-derivation.o `test -f 'src/derivation.c' || echo './'`src/derivation.c gcc -DEXEEXT=\"\" -I. -I./lib -I. 
-I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-derives.o `test -f 'src/derives.c' || echo './'`src/derives.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-files.o `test -f 'src/files.c' || echo './'`src/files.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-fixits.o `test -f 'src/fixits.c' || echo './'`src/fixits.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-getargs.o `test -f 'src/getargs.c' || echo './'`src/getargs.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-glyphs.o `test -f 'src/glyphs.c' || echo './'`src/glyphs.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-gram.o `test -f 'src/gram.c' || echo './'`src/gram.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-graphviz.o `test -f 'src/graphviz.c' || echo './'`src/graphviz.c gcc -DEXEEXT=\"\" -I. -I./lib -I. 
-I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-ielr.o `test -f 'src/ielr.c' || echo './'`src/ielr.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-lalr.o `test -f 'src/lalr.c' || echo './'`src/lalr.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-location.o `test -f 'src/location.c' || echo './'`src/location.c
src/lalr.c: In function 'set_goto_map':
src/lalr.c:152:26: warning: format '%ld' expects argument of type 'long int', but argument 5 has type 'goto_number' {aka 'unsigned int'} [-Wformat=]
  152 |     fprintf (stderr, "goto_map[%d (%s)] = %ld .. %ld\n",
      |                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  153 |              i, symbols[ntokens + i]->tag,
  154 |              goto_map[i], goto_map[i+1] - 1);
      |              ~~~~~~~~~~~
      |              |
      |              goto_number {aka unsigned int}
src/lalr.c:152:26: warning: format '%ld' expects argument of type 'long int', but argument 6 has type 'goto_number' {aka 'unsigned int'} [-Wformat=]
  152 |     fprintf (stderr, "goto_map[%d (%s)] = %ld .. %ld\n",
      |                      ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  153 |              i, symbols[ntokens + i]->tag,
  154 |              goto_map[i], goto_map[i+1] - 1);
      |                           ~~~~~~~~~~~~~~~~~
      |                           |
      |                           goto_number {aka unsigned int}
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-lr0.o `test -f 'src/lr0.c' || echo './'`src/lr0.c
gcc -DEXEEXT=\"\" -I. -I./lib -I.
-I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-lssi.o `test -f 'src/lssi.c' || echo './'`src/lssi.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-main.o `test -f 'src/main.c' || echo './'`src/main.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-muscle-tab.o `test -f 'src/muscle-tab.c' || echo './'`src/muscle-tab.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-named-ref.o `test -f 'src/named-ref.c' || echo './'`src/named-ref.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-nullable.o `test -f 'src/nullable.c' || echo './'`src/nullable.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-output.o `test -f 'src/output.c' || echo './'`src/output.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. 
-fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-parse-gram.o `test -f 'src/parse-gram.c' || echo './'`src/parse-gram.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-parse-simulation.o `test -f 'src/parse-simulation.c' || echo './'`src/parse-simulation.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-print-graph.o `test -f 'src/print-graph.c' || echo './'`src/print-graph.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-print-xml.o `test -f 'src/print-xml.c' || echo './'`src/print-xml.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-print.o `test -f 'src/print.c' || echo './'`src/print.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-reader.o `test -f 'src/reader.c' || echo './'`src/reader.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-reduce.o `test -f 'src/reduce.c' || echo './'`src/reduce.c gcc -DEXEEXT=\"\" -I. -I./lib -I. 
-I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-relation.o `test -f 'src/relation.c' || echo './'`src/relation.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-scan-code-c.o `test -f 'src/scan-code-c.c' || echo './'`src/scan-code-c.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-scan-gram-c.o `test -f 'src/scan-gram-c.c' || echo './'`src/scan-gram-c.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-scan-skel-c.o `test -f 'src/scan-skel-c.c' || echo './'`src/scan-skel-c.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-state.o `test -f 'src/state.c' || echo './'`src/state.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-state-item.o `test -f 'src/state-item.c' || echo './'`src/state-item.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. 
-fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-strversion.o `test -f 'src/strversion.c' || echo './'`src/strversion.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-symlist.o `test -f 'src/symlist.c' || echo './'`src/symlist.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-symtab.o `test -f 'src/symtab.c' || echo './'`src/symtab.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-tables.o `test -f 'src/tables.c' || echo './'`src/tables.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DINSTALLDIR=\"/usr/bin\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o src/bison-uniqstr.o `test -f 'src/uniqstr.c' || echo './'`src/uniqstr.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-allocator.o `test -f 'lib/allocator.c' || echo './'`lib/allocator.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-areadlink.o `test -f 'lib/areadlink.c' || echo './'`lib/areadlink.c gcc -DEXEEXT=\"\" -I. -I./lib -I. 
-I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-argmatch.o `test -f 'lib/argmatch.c' || echo './'`lib/argmatch.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-gl_array_list.o `test -f 'lib/gl_array_list.c' || echo './'`lib/gl_array_list.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-basename-lgpl.o `test -f 'lib/basename-lgpl.c' || echo './'`lib/basename-lgpl.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-binary-io.o `test -f 'lib/binary-io.c' || echo './'`lib/binary-io.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-bitrotate.o `test -f 'lib/bitrotate.c' || echo './'`lib/bitrotate.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/libbison_a-bitset.o `test -f 'lib/bitset.c' || echo './'`lib/bitset.c gcc -DEXEEXT=\"\" -I. -I./lib -I. 
-I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/bitset/libbison_a-array.o `test -f 'lib/bitset/array.c' || echo './'`lib/bitset/array.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/bitset/libbison_a-stats.o `test -f 'lib/bitset/stats.c' || echo './'`lib/bitset/stats.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/bitset/libbison_a-table.o `test -f 'lib/bitset/table.c' || echo './'`lib/bitset/table.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/bitset/libbison_a-list.o `test -f 'lib/bitset/list.c' || echo './'`lib/bitset/list.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/bitset/libbison_a-vector.o `test -f 'lib/bitset/vector.c' || echo './'`lib/bitset/vector.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/glthread/libbison_a-lock.o `test -f 'lib/glthread/lock.c' || echo './'`lib/glthread/lock.c gcc -DEXEEXT=\"\" -I. -I./lib -I. 
-I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/malloc/libbison_a-scratch_buffer_dupfree.o `test -f 'lib/malloc/scratch_buffer_dupfree.c' || echo './'`lib/malloc/scratch_buffer_dupfree.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/malloc/libbison_a-scratch_buffer_grow.o `test -f 'lib/malloc/scratch_buffer_grow.c' || echo './'`lib/malloc/scratch_buffer_grow.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/malloc/libbison_a-scratch_buffer_grow_preserve.o `test -f 'lib/malloc/scratch_buffer_grow_preserve.c' || echo './'`lib/malloc/scratch_buffer_grow_preserve.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/malloc/libbison_a-scratch_buffer_set_array_size.o `test -f 'lib/malloc/scratch_buffer_set_array_size.c' || echo './'`lib/malloc/scratch_buffer_set_array_size.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/unistr/libbison_a-u8-mbtoucr.o `test -f 'lib/unistr/u8-mbtoucr.c' || echo './'`lib/unistr/u8-mbtoucr.c gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. 
-fstack-protector-strong -Wformat -Werror=format-security -c -o lib/unistr/libbison_a-u8-uctomb.o `test -f 'lib/unistr/u8-uctomb.c' || echo './'`lib/unistr/u8-uctomb.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/unistr/libbison_a-u8-uctomb-aux.o `test -f 'lib/unistr/u8-uctomb-aux.c' || echo './'`lib/unistr/u8-uctomb-aux.c
gcc -DEXEEXT=\"\" -I. -I./lib -I. -I./lib -DDEFAULT_TEXT_DOMAIN=\"bison-gnulib\" -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o lib/uniwidth/libbison_a-width.o `test -f 'lib/uniwidth/width.c' || echo './'`lib/uniwidth/width.c
rm -f lib/liby.a
ar cr lib/liby.a lib/main.o lib/yyerror.o
ranlib lib/liby.a
rm -f lib/libbison.a
ar cr lib/libbison.a lib/libbison_a-allocator.o lib/libbison_a-areadlink.o lib/libbison_a-argmatch.o lib/libbison_a-gl_array_list.o lib/libbison_a-basename-lgpl.o lib/libbison_a-binary-io.o lib/libbison_a-bitrotate.o lib/libbison_a-bitset.o lib/bitset/libbison_a-array.o lib/bitset/libbison_a-stats.o lib/bitset/libbison_a-table.o lib/bitset/libbison_a-list.o lib/bitset/libbison_a-vector.o lib/libbison_a-bitsetv.o lib/libbison_a-c-ctype.o lib/libbison_a-c-strcasecmp.o lib/libbison_a-c-strncasecmp.o lib/libbison_a-canonicalize.o lib/libbison_a-careadlinkat.o lib/libbison_a-cloexec.o lib/libbison_a-close-stream.o lib/libbison_a-closeout.o lib/libbison_a-concat-filename.o lib/libbison_a-dirname.o lib/libbison_a-basename.o lib/libbison_a-dirname-lgpl.o lib/libbison_a-stripslash.o lib/libbison_a-execute.o lib/libbison_a-exitfail.o lib/libbison_a-fatal-signal.o lib/libbison_a-fd-safer-flag.o lib/libbison_a-dup-safer-flag.o lib/libbison_a-file-set.o lib/libbison_a-findprog-in.o lib/libbison_a-fopen-safer.o lib/libbison_a-fstrcmp.o
lib/libbison_a-gethrxtime.o lib/libbison_a-xtime.o lib/libbison_a-getprogname.o lib/libbison_a-gettime.o lib/libbison_a-hard-locale.o lib/libbison_a-hash.o lib/libbison_a-gl_hash_map.o lib/libbison_a-hash-pjw.o lib/libbison_a-hash-triple-simple.o lib/libbison_a-ialloc.o lib/libbison_a-integer_length.o lib/libbison_a-integer_length_l.o lib/libbison_a-gl_linked_list.o lib/libbison_a-gl_list.o lib/libbison_a-localcharset.o lib/glthread/libbison_a-lock.o lib/libbison_a-gl_map.o lib/libbison_a-math.o lib/libbison_a-mbchar.o lib/libbison_a-mbfile.o lib/libbison_a-mbswidth.o lib/libbison_a-gl_oset.o lib/libbison_a-pipe2.o lib/libbison_a-pipe2-safer.o lib/libbison_a-printf-frexp.o lib/libbison_a-printf-frexpl.o lib/libbison_a-progname.o lib/libbison_a-quotearg.o lib/libbison_a-gl_rbtree_oset.o lib/libbison_a-gl_rbtreehash_list.o lib/malloc/libbison_a-scratch_buffer_dupfree.o lib/malloc/libbison_a-scratch_buffer_grow.o lib/malloc/libbison_a-scratch_buffer_grow_preserve.o lib/malloc/libbison_a-scratch_buffer_set_array_size.o lib/libbison_a-setlocale_null.o lib/libbison_a-sig-handler.o lib/libbison_a-spawn-pipe.o lib/glthread/libbison_a-threadlib.o lib/libbison_a-timespec.o lib/libbison_a-timevar.o lib/glthread/libbison_a-tls.o lib/libbison_a-unicodeio.o lib/libbison_a-unistd.o lib/libbison_a-dup-safer.o lib/libbison_a-fd-safer.o lib/libbison_a-pipe-safer.o lib/unistr/libbison_a-u8-mbtoucr.o lib/unistr/libbison_a-u8-uctomb.o lib/unistr/libbison_a-u8-uctomb-aux.o lib/uniwidth/libbison_a-width.o lib/libbison_a-wait-process.o lib/libbison_a-wctype-h.o lib/libbison_a-xmalloc.o lib/libbison_a-xalloc-die.o lib/libbison_a-xconcat-filename.o lib/libbison_a-xhash.o lib/libbison_a-gl_xlist.o lib/libbison_a-gl_xmap.o lib/libbison_a-xmemdup0.o lib/libbison_a-xreadlink.o lib/libbison_a-xsize.o lib/libbison_a-xstrndup.o lib/libbison_a-get-errno.o lib/libbison_a-path-join.o lib/libbison_a-asnprintf.o lib/libbison_a-asprintf.o lib/libbison_a-fcntl.o lib/libbison_a-fprintf.o 
lib/libbison_a-fseterr.o lib/libbison_a-mbrtowc.o lib/libbison_a-obstack.o lib/libbison_a-printf.o lib/libbison_a-printf-args.o lib/libbison_a-printf-parse.o lib/libbison_a-readline.o lib/libbison_a-snprintf.o lib/libbison_a-spawn_faction_addchdir.o lib/libbison_a-sprintf.o lib/libbison_a-vasnprintf.o lib/libbison_a-vasprintf.o lib/libbison_a-vfprintf.o lib/libbison_a-vsnprintf.o lib/libbison_a-vsprintf.o ranlib lib/libbison.a gcc -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now -o src/bison src/bison-AnnotationList.o src/bison-InadequacyList.o src/bison-Sbitset.o src/bison-assoc.o src/bison-closure.o src/bison-complain.o src/bison-conflicts.o src/bison-counterexample.o src/bison-derivation.o src/bison-derives.o src/bison-files.o src/bison-fixits.o src/bison-getargs.o src/bison-glyphs.o src/bison-gram.o src/bison-graphviz.o src/bison-ielr.o src/bison-lalr.o src/bison-location.o src/bison-lr0.o src/bison-lssi.o src/bison-main.o src/bison-muscle-tab.o src/bison-named-ref.o src/bison-nullable.o src/bison-output.o src/bison-parse-gram.o src/bison-parse-simulation.o src/bison-print-graph.o src/bison-print-xml.o src/bison-print.o src/bison-reader.o src/bison-reduce.o src/bison-relation.o src/bison-scan-code-c.o src/bison-scan-gram-c.o src/bison-scan-skel-c.o src/bison-state.o src/bison-state-item.o src/bison-strversion.o src/bison-symlist.o src/bison-symtab.o src/bison-tables.o src/bison-uniqstr.o lib/libbison.a /bin/mkdir -p doc LC_ALL=C tests/bison --version >doc/bison.help.tmp LC_ALL=C tests/bison --help | \ sed -e 's,^Usage: .*/bison \[OPTION\],Usage: bison [OPTION],g' \ -e '/translation bugs/d' >>doc/bison.help.tmp ./build-aux/move-if-change doc/bison.help.tmp doc/bison.help if /bin/bash '/build/bison-3.8.2+dfsg/build-aux/missing' help2man --version >/dev/null 2>&1; then \ /bin/bash '/build/bison-3.8.2+dfsg/build-aux/missing' help2man \ --include=./doc/bison.x \ 
--output=doc/bison.1.tmp tests/bison && \ { sed 's/^\(\.TH[^"]*"[^"]*"[^"]*\)"[^"]*"/\1/' doc/bison.1 >doc/bison.1a.tmp || true; } && \ sed 's/^\(\.TH[^"]*"[^"]*"[^"]*\)"[^"]*"/\1/' doc/bison.1.tmp >doc/bison.1b.tmp && \ if diff doc/bison.1a.tmp doc/bison.1b.tmp >/dev/null 2>&1; then \ touch doc/bison.1; \ else \ mv doc/bison.1.tmp doc/bison.1; \ fi && \ rm -f doc/bison.1*.tmp; \ elif test -d ./.git; then \ echo >&2 "ERROR: doc/bison.1: help2man is needed"; \ exit 1; \ else \ touch doc/bison.1; \ fi /usr/bin/perl -pi.bak -0777 \ -e 's{(^ --.*\n(?: {10}.*\n)*)}' \ -e '{' \ -e ' $def = $1;' \ -e ' $def =~ s/‘|’//g;' \ -e ' $def;' \ -e '}gem;' ./doc/bison.info make[3]: Leaving directory '/build/bison-3.8.2+dfsg' make[2]: Leaving directory '/build/bison-3.8.2+dfsg' make[1]: Leaving directory '/build/bison-3.8.2+dfsg' dh_auto_test make -j3 check "TESTSUITEFLAGS=-j3 --verbose" VERBOSE=1 make[1]: Entering directory '/build/bison-3.8.2+dfsg' if test -d ./.git \ && git --version >/dev/null 2>&1; then \ cd . && \ git submodule --quiet foreach \ 'test "$(git rev-parse "$sha1")" \ = "$(git merge-base origin "$sha1")"' \ || { echo 'maint.mk: found non-public submodule commit' >&2; \ exit 1; }; \ else \ : ; \ fi make check-recursive make[2]: Entering directory '/build/bison-3.8.2+dfsg' Making check in po make[3]: Entering directory '/build/bison-3.8.2+dfsg/po' make[3]: Nothing to be done for 'check'. make[3]: Leaving directory '/build/bison-3.8.2+dfsg/po' Making check in runtime-po make[3]: Entering directory '/build/bison-3.8.2+dfsg/runtime-po' make[3]: Nothing to be done for 'check'. make[3]: Leaving directory '/build/bison-3.8.2+dfsg/runtime-po' Making check in gnulib-po make[3]: Entering directory '/build/bison-3.8.2+dfsg/gnulib-po' make[3]: Nothing to be done for 'check'. make[3]: Leaving directory '/build/bison-3.8.2+dfsg/gnulib-po' Making check in . 
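The `move-if-change` helper invoked above only replaces `doc/bison.help` when the regenerated copy actually differs, so downstream timestamp checks see no spurious change. A minimal sketch of the same idiom, using hypothetical file names `new.txt`/`old.txt`:

```shell
# move-if-change idiom: keep the old file (and its timestamp) when the
# freshly generated copy is byte-identical; otherwise rename over it.
printf 'same\n' > old.txt
printf 'same\n' > new.txt
if cmp -s new.txt old.txt; then
  rm -f new.txt          # identical: drop the temp file
else
  mv -f new.txt old.txt  # different: replace the target
fi
cat old.txt              # prints: same
```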
make[3]: Entering directory '/build/bison-3.8.2+dfsg' /bin/mkdir -p doc LC_ALL=C tests/bison --version >doc/bison.help.tmp LC_ALL=C tests/bison --help | \ sed -e 's,^Usage: .*/bison \[OPTION\],Usage: bison [OPTION],g' \ -e '/translation bugs/d' >>doc/bison.help.tmp ./build-aux/move-if-change doc/bison.help.tmp doc/bison.help make examples/c/calc/calc examples/c/glr/c++-types examples/c/lexcalc/lexcalc examples/c/mfcalc/mfcalc examples/c/pushcalc/calc examples/c/reccalc/reccalc examples/c/rpcalc/rpcalc examples/c++/calc++/calc++ examples/c++/glr/c++-types examples/c++/simple examples/c++/variant examples/c++/variant-11 ./tests/bison tests/atconfig tests/atlocal make[4]: Entering directory '/build/bison-3.8.2+dfsg' /bin/bash ./build-aux/ylwrap examples/c/calc/calc.y y.tab.c examples/c/calc/calc.c y.tab.h `echo examples/c/calc/calc.c | sed -e s/cc$/hh/ -e s/cpp$/hpp/ -e s/cxx$/hxx/ -e s/c++$/h++/ -e s/c$/h/` y.output examples/c/calc/calc.output -- ./tests/bison -o y.tab.c --defines -Werror -Wall,dangling-alias --report=all --no-lines /bin/bash ./build-aux/ylwrap examples/c/glr/c++-types.y y.tab.c examples/c/glr/c++-types.c y.tab.h `echo examples/c/glr/c++-types.c | sed -e s/cc$/hh/ -e s/cpp$/hpp/ -e s/cxx$/hxx/ -e s/c++$/h++/ -e s/c$/h/` y.output examples/c/glr/c++-types.output -- ./tests/bison -o y.tab.c --defines -Werror -Wall,dangling-alias --report=all --no-lines /bin/bash ./build-aux/ylwrap examples/c/lexcalc/parse.y y.tab.c examples/c/lexcalc/parse.c y.tab.h `echo examples/c/lexcalc/parse.c | sed -e s/cc$/hh/ -e s/cpp$/hpp/ -e s/cxx$/hxx/ -e s/c++$/h++/ -e s/c$/h/` y.output examples/c/lexcalc/parse.output -- ./tests/bison -o y.tab.c --defines -Werror -Wall,dangling-alias --report=all --no-lines updating examples/c/calc/calc.output updating examples/c/calc/calc.h \ /bin/bash ./build-aux/ylwrap `test -f 'examples/c/lexcalc/scan.l' || echo './'`examples/c/lexcalc/scan.l lex.yy.c examples/c/lexcalc/scan.c -- flex updating examples/c/glr/c++-types.output updating 
examples/c/lexcalc/parse.output updating examples/c/glr/c++-types.h /bin/bash ./build-aux/ylwrap examples/c/mfcalc/mfcalc.y y.tab.c examples/c/mfcalc/mfcalc.c y.tab.h `echo examples/c/mfcalc/mfcalc.c | sed -e s/cc$/hh/ -e s/cpp$/hpp/ -e s/cxx$/hxx/ -e s/c++$/h++/ -e s/c$/h/` y.output examples/c/mfcalc/mfcalc.output -- ./tests/bison -o y.tab.c --defines -Werror -Wall,dangling-alias --report=all --no-lines updating examples/c/lexcalc/parse.h /bin/bash ./build-aux/ylwrap examples/c/pushcalc/calc.y y.tab.c examples/c/pushcalc/calc.c y.tab.h `echo examples/c/pushcalc/calc.c | sed -e s/cc$/hh/ -e s/cpp$/hpp/ -e s/cxx$/hxx/ -e s/c++$/h++/ -e s/c$/h/` y.output examples/c/pushcalc/calc.output -- ./tests/bison -o y.tab.c --defines -Werror -Wall,dangling-alias --report=all --no-lines /bin/bash ./build-aux/ylwrap examples/c/reccalc/parse.y y.tab.c examples/c/reccalc/parse.c y.tab.h `echo examples/c/reccalc/parse.c | sed -e s/cc$/hh/ -e s/cpp$/hpp/ -e s/cxx$/hxx/ -e s/c++$/h++/ -e s/c$/h/` y.output examples/c/reccalc/parse.output -- ./tests/bison -o y.tab.c --defines -Werror -Wall,dangling-alias --report=all --no-lines updating examples/c/mfcalc/mfcalc.output updating examples/c/mfcalc/mfcalc.h updating examples/c/reccalc/parse.output /bin/bash ./build-aux/ylwrap examples/c/rpcalc/rpcalc.y y.tab.c examples/c/rpcalc/rpcalc.c y.tab.h `echo examples/c/rpcalc/rpcalc.c | sed -e s/cc$/hh/ -e s/cpp$/hpp/ -e s/cxx$/hxx/ -e s/c++$/h++/ -e s/c$/h/` y.output examples/c/rpcalc/rpcalc.output -- ./tests/bison -o y.tab.c --defines -Werror -Wall,dangling-alias --report=all --no-lines updating examples/c/pushcalc/calc.output updating examples/c/pushcalc/calc.h updating examples/c/reccalc/parse.h rm -f examples/c++/calc++/parser.stamp \ /bin/bash ./build-aux/ylwrap `test -f 'examples/c++/calc++/scanner.ll' || echo './'`examples/c++/calc++/scanner.ll lex.yy.c examples/c++/calc++/scanner.cc -- flex touch examples/c++/calc++/parser.stamp.tmp ./tests/bison -o y.tab.c --defines -Werror 
-Wall,dangling-alias --report=all --no-lines -o examples/c++/calc++/parser.cc examples/c++/calc++/parser.yy rm -f examples/c++/glr/c++-types.stamp touch examples/c++/glr/c++-types.stamp.tmp ./tests/bison -o y.tab.c --defines -Werror -Wall,dangling-alias --report=all --no-lines -o examples/c++/glr/c++-types.cc examples/c++/glr/c++-types.yy updating examples/c/rpcalc/rpcalc.output mv -f examples/c++/calc++/parser.stamp.tmp examples/c++/calc++/parser.stamp updating examples/c/rpcalc/rpcalc.h \ /bin/bash ./build-aux/ylwrap `test -f 'examples/c++/simple.yy' || echo './'`examples/c++/simple.yy y.tab.c examples/c++/simple.cc y.tab.h `echo examples/c++/simple.cc | sed -e s/cc$/hh/ -e s/cpp$/hpp/ -e s/cxx$/hxx/ -e s/c++$/h++/ -e s/c$/h/` y.output examples/c++/simple.output -- ./tests/bison -o y.tab.c --defines -Werror -Wall,dangling-alias --report=all --no-lines \ /bin/bash ./build-aux/ylwrap `test -f 'examples/c++/variant.yy' || echo './'`examples/c++/variant.yy y.tab.c examples/c++/variant.cc y.tab.h `echo examples/c++/variant.cc | sed -e s/cc$/hh/ -e s/cpp$/hpp/ -e s/cxx$/hxx/ -e s/c++$/h++/ -e s/c$/h/` y.output examples/c++/variant.output -- ./tests/bison -o y.tab.c --defines -Werror -Wall,dangling-alias --report=all --no-lines updating examples/c++/simple.output mv -f examples/c++/glr/c++-types.stamp.tmp examples/c++/glr/c++-types.stamp \ /bin/bash ./build-aux/ylwrap `test -f 'examples/c++/variant-11.yy' || echo './'`examples/c++/variant-11.yy y.tab.c examples/c++/variant-11.cc y.tab.h `echo examples/c++/variant-11.cc | sed -e s/cc$/hh/ -e s/cpp$/hpp/ -e s/cxx$/hxx/ -e s/c++$/h++/ -e s/c$/h/` y.output examples/c++/variant-11.output -- ./tests/bison -o y.tab.c --defines -Werror -Wall,dangling-alias --report=all --no-lines make[4]: 'tests/bison' is up to date. make[4]: Nothing to be done for 'tests/atconfig'. make[4]: 'tests/atlocal' is up to date. 
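The backquoted sed pipeline that recurs in the ylwrap calls above derives each header's name from the generated source's name by rewriting the extension (cc→hh, cpp→hpp, cxx→hxx, c++→h++, c→h). In isolation (the `to_header` wrapper is only for illustration):

```shell
# Rewrite a source extension to its header counterpart, exactly as the
# ylwrap invocations above do; to_header is a hypothetical wrapper.
to_header() {
  echo "$1" | sed -e s/cc$/hh/ -e s/cpp$/hpp/ -e s/cxx$/hxx/ -e s/c++$/h++/ -e s/c$/h/
}
to_header examples/c/calc/calc.c    # prints examples/c/calc/calc.h
to_header examples/c++/simple.cc    # prints examples/c++/simple.hh
```

The expressions apply left to right, and each anchors at the end of the name, so directory components such as `c++/` are untouched.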
updating examples/c++/simple.hh gcc -DEXEEXT=\"\" -I./examples/c/calc -I./examples/c/calc -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c/calc/examples_c_calc_calc-calc.o `test -f 'examples/c/calc/calc.c' || echo './'`examples/c/calc/calc.c updating examples/c++/variant.output updating examples/c++/variant.hh gcc -DEXEEXT=\"\" -I./examples/c/glr -I./examples/c/glr -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c/glr/examples_c_glr_c___types-c++-types.o `test -f 'examples/c/glr/c++-types.c' || echo './'`examples/c/glr/c++-types.c updating examples/c++/variant-11.output updating examples/c++/variant-11.hh gcc -DEXEEXT=\"\" -I./examples/c/lexcalc -I./examples/c/lexcalc -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c/lexcalc/examples_c_lexcalc_lexcalc-parse.o `test -f 'examples/c/lexcalc/parse.c' || echo './'`examples/c/lexcalc/parse.c gcc -DEXEEXT=\"\" -I./examples/c/lexcalc -I./examples/c/lexcalc -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c/lexcalc/examples_c_lexcalc_lexcalc-scan.o `test -f 'examples/c/lexcalc/scan.c' || echo './'`examples/c/lexcalc/scan.c gcc -DEXEEXT=\"\" -I./examples/c/mfcalc -I./examples/c/mfcalc -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. 
-fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c/mfcalc/examples_c_mfcalc_mfcalc-mfcalc.o `test -f 'examples/c/mfcalc/mfcalc.c' || echo './'`examples/c/mfcalc/mfcalc.c gcc -DEXEEXT=\"\" -I./examples/c/pushcalc -I./examples/c/pushcalc -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c/pushcalc/examples_c_pushcalc_calc-calc.o `test -f 'examples/c/pushcalc/calc.c' || echo './'`examples/c/pushcalc/calc.c gcc -DEXEEXT=\"\" -I./examples/c/reccalc -I./examples/c/reccalc -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c/reccalc/examples_c_reccalc_reccalc-parse.o `test -f 'examples/c/reccalc/parse.c' || echo './'`examples/c/reccalc/parse.c gcc -DEXEEXT=\"\" -I./examples/c/reccalc -I./examples/c/reccalc -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c/reccalc/examples_c_reccalc_reccalc-scan.o `test -f 'examples/c/reccalc/scan.c' || echo './'`examples/c/reccalc/scan.c gcc -DEXEEXT=\"\" -I./examples/c/rpcalc -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c/rpcalc/examples_c_rpcalc_rpcalc-rpcalc.o `test -f 'examples/c/rpcalc/rpcalc.c' || echo './'`examples/c/rpcalc/rpcalc.c g++ -DEXEEXT=\"\" -I. -Wdate-time -D_FORTIFY_SOURCE=2 -std=c++11 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c++/simple-simple.o `test -f 'examples/c++/simple.cc' || echo './'`examples/c++/simple.cc g++ -DEXEEXT=\"\" -I. -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. 
-fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c++/variant-variant.o `test -f 'examples/c++/variant.cc' || echo './'`examples/c++/variant.cc
g++ -DEXEEXT=\"\" -I. -Wdate-time -D_FORTIFY_SOURCE=2 -std=c++11 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c++/variant_11-variant-11.o `test -f 'examples/c++/variant-11.cc' || echo './'`examples/c++/variant-11.cc
In file included from /usr/include/c++/12/vector:70, from ./examples/c++/simple.hh:53, from examples/c++/simple.cc:41:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type>]' at ./examples/c++/simple.hh:1040:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:70, from ./examples/c++/variant-11.hh:51, from examples/c++/variant-11.cc:41:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type>]' at ./examples/c++/variant-11.hh:1354:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type>]' at ./examples/c++/variant-11.hh:1354:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at examples/c++/variant-11.cc:396:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:70, from ./examples/c++/variant.hh:50, from examples/c++/variant.cc:41:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type>]' at ./examples/c++/variant.hh:1351:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type>]' at ./examples/c++/variant.hh:1351:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at examples/c++/variant.cc:398:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
gcc -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now -o examples/c/calc/calc examples/c/calc/examples_c_calc_calc-calc.o
gcc -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now -o examples/c/glr/c++-types examples/c/glr/examples_c_glr_c___types-c++-types.o
gcc -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now -o examples/c/lexcalc/lexcalc examples/c/lexcalc/examples_c_lexcalc_lexcalc-parse.o examples/c/lexcalc/examples_c_lexcalc_lexcalc-scan.o
gcc -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now -o examples/c/mfcalc/mfcalc examples/c/mfcalc/examples_c_mfcalc_mfcalc-mfcalc.o -lm
gcc -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=.
-fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now -o examples/c/pushcalc/calc examples/c/pushcalc/examples_c_pushcalc_calc-calc.o gcc -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now -o examples/c/reccalc/reccalc examples/c/reccalc/examples_c_reccalc_reccalc-parse.o examples/c/reccalc/examples_c_reccalc_reccalc-scan.o gcc -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now -o examples/c/rpcalc/rpcalc examples/c/rpcalc/examples_c_rpcalc_rpcalc-rpcalc.o -lm g++ -DEXEEXT=\"\" -I./examples/c++/calc++ -I./examples/c++/calc++ -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c++/calc++/calc__-driver.o `test -f 'examples/c++/calc++/driver.cc' || echo './'`examples/c++/calc++/driver.cc g++ -DEXEEXT=\"\" -I./examples/c++/calc++ -I./examples/c++/calc++ -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c++/calc++/calc__-scanner.o `test -f 'examples/c++/calc++/scanner.cc' || echo './'`examples/c++/calc++/scanner.cc g++ -DEXEEXT=\"\" -I./examples/c++/calc++ -I./examples/c++/calc++ -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c++/calc++/calc__-calc++.o `test -f 'examples/c++/calc++/calc++.cc' || echo './'`examples/c++/calc++/calc++.cc g++ -DEXEEXT=\"\" -I./examples/c++/calc++ -I./examples/c++/calc++ -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. 
-fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c++/calc++/calc__-parser.o `test -f 'examples/c++/calc++/parser.cc' || echo './'`examples/c++/calc++/parser.cc
g++ -DEXEEXT=\"\" -I./examples/c++/glr -I./examples/c++/glr -Wdate-time -D_FORTIFY_SOURCE=2 -std=c++14 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -c -o examples/c++/glr/examples_c___glr_c___types-c++-types.o `test -f 'examples/c++/glr/c++-types.cc' || echo './'`examples/c++/glr/c++-types.cc
g++ -std=c++11 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now -o examples/c++/simple examples/c++/simple-simple.o
g++ -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now -o examples/c++/variant examples/c++/variant-variant.o
g++ -std=c++11 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now -o examples/c++/variant-11 examples/c++/variant_11-variant-11.o
In file included from /usr/include/c++/12/vector:70, from examples/c++/calc++/parser.hh:58, from examples/c++/calc++/parser.cc:41:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type>]' at examples/c++/calc++/parser.hh:1229:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type>]' at examples/c++/calc++/parser.hh:1229:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at examples/c++/calc++/parser.cc:348:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:70, from examples/c++/glr/c++-types.hh:55, from examples/c++/glr/c++-types.cc:66:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at examples/c++/glr/c++-types.cc:1980:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at examples/c++/glr/c++-types.cc:1787:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&, const yy::parser::location_type&)' at examples/c++/glr/c++-types.cc:2794:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ g++ -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now -o examples/c++/calc++/calc++ examples/c++/calc++/calc__-driver.o examples/c++/calc++/calc__-scanner.o examples/c++/calc++/calc__-calc++.o examples/c++/calc++/calc__-parser.o In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at examples/c++/glr/c++-types.cc:1980:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at examples/c++/glr/c++-types.cc:1801:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at examples/c++/glr/c++-types.cc:2266:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at examples/c++/glr/c++-types.cc:1980:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at examples/c++/glr/c++-types.cc:1787:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at examples/c++/glr/c++-types.cc:2777:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at examples/c++/glr/c++-types.cc:2765:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ g++ -std=c++14 -g -O2 -ffile-prefix-map=/build/bison-3.8.2+dfsg=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro -Wl,-z,now -o examples/c++/glr/c++-types examples/c++/glr/examples_c___glr_c___types-c++-types.o make[4]: Leaving directory '/build/bison-3.8.2+dfsg' make check-TESTS check-local make[4]: Entering directory '/build/bison-3.8.2+dfsg' make all-recursive rm -f tests/package.m4 tests/package.m4.tmp { \ echo '# Signature of the current package.'; \ echo 'm4_define([AT_PACKAGE_NAME], [GNU Bison])'; \ echo 'm4_define([AT_PACKAGE_TARNAME], [bison])'; \ echo 'm4_define([AT_PACKAGE_VERSION], [3.8.2])'; \ echo 'm4_define([AT_PACKAGE_STRING], [GNU Bison 3.8.2])'; \ echo 'm4_define([AT_PACKAGE_BUGREPORT], [bug-bison@gnu.org])'; \ } >tests/package.m4.tmp mv tests/package.m4.tmp tests/package.m4 \ /bin/bash '/build/bison-3.8.2+dfsg/build-aux/missing' autom4te --language=autotest -I ./tests ./tests/testsuite.at -o tests/testsuite.tmp make[5]: Entering directory '/build/bison-3.8.2+dfsg' make[5]: Entering directory '/build/bison-3.8.2+dfsg' Making all in po make[6]: Entering directory '/build/bison-3.8.2+dfsg/po' make[6]: Nothing to be done for 'all'. make[6]: Leaving directory '/build/bison-3.8.2+dfsg/po' Making all in runtime-po make[6]: Entering directory '/build/bison-3.8.2+dfsg/runtime-po' make[6]: Nothing to be done for 'all'. make[6]: Leaving directory '/build/bison-3.8.2+dfsg/runtime-po' Making all in gnulib-po make[6]: Entering directory '/build/bison-3.8.2+dfsg/gnulib-po' make[6]: Nothing to be done for 'all'. make[6]: Leaving directory '/build/bison-3.8.2+dfsg/gnulib-po' Making all in . 
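The brace-group that writes `tests/package.m4` above follows a write-temp-then-rename pattern: the m4_define lines are collected in `package.m4.tmp`, and only an atomic `mv` exposes the finished file, so an interrupted make can never leave a truncated `package.m4` for autom4te to read. The pattern in isolation:

```shell
# Build the autotest package signature in a temp file, then rename it
# into place; mv within one filesystem is an atomic rename.
{
  echo '# Signature of the current package.'
  echo 'm4_define([AT_PACKAGE_NAME], [GNU Bison])'
  echo 'm4_define([AT_PACKAGE_VERSION], [3.8.2])'
} > package.m4.tmp
mv package.m4.tmp package.m4
grep -c m4_define package.m4   # prints: 2
```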
make[6]: Entering directory '/build/bison-3.8.2+dfsg'
PASS: examples/c/mfcalc/mfcalc.test
/bin/mkdir -p doc
LC_ALL=C tests/bison --version >doc/bison.help.tmp
LC_ALL=C tests/bison --help | \
  sed -e 's,^Usage: .*/bison \[OPTION\],Usage: bison [OPTION],g' \
      -e '/translation bugs/d' >>doc/bison.help.tmp
./build-aux/move-if-change doc/bison.help.tmp doc/bison.help
make[6]: Leaving directory '/build/bison-3.8.2+dfsg'
make[5]: Leaving directory '/build/bison-3.8.2+dfsg'
PASS: examples/c/rpcalc/rpcalc.test
PASS: examples/c/glr/c++-types.test
PASS: examples/c/calc/calc.test
PASS: examples/c/lexcalc/lexcalc.test
PASS: examples/c/pushcalc/calc.test
PASS: examples/c/reccalc/reccalc.test
PASS: examples/c++/calc++/calc++.test
PASS: examples/c++/glr/c++-types.test
PASS: examples/c++/simple.test
PASS: examples/c++/variant.test
PASS: examples/c++/variant-11.test
============================================================================
Testsuite summary for GNU Bison 3.8.2
============================================================================
# TOTAL: 12
# PASS:  12
# SKIP:  0
# XFAIL: 0
# FAIL:  0
# XPASS: 0
# ERROR: 0
============================================================================
make[5]: Leaving directory '/build/bison-3.8.2+dfsg'
"/usr/bin/perl" -pi -e 's/\@tb\@/\t/g' tests/testsuite.tmp
mv tests/testsuite.tmp tests/testsuite
/bin/bash ./tests/testsuite -C tests -j3 --verbose
## --------------------------- ##
## GNU Bison 3.8.2 test suite. ##
## --------------------------- ##
1. m4.at:21: testing Generating Comments ...
./m4.at:53: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -S ./input.m4 input.y
2. input.at:27: testing Invalid number of arguments ...
./input.at:29: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret
3.
input.at:58: testing Invalid options ...
./input.at:67: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -ferror=caret input.y
stderr:
./input.at:34: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret 1.y 2.y
bison: invalid argument 'error=caret' for '--feature'
Valid arguments are:
  - 'none'
  - 'caret', 'diagnostics-show-caret'
  - 'fixit', 'diagnostics-parseable-fixits'
  - 'syntax-only'
  - 'all'
./input.at:68: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --report=error=itemsets input.y
stderr:
./input.at:42: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --skeleton
bison: invalid argument 'error=itemsets' for '--report'
Valid arguments are:
  - 'none'
  - 'states'
  - 'itemsets'
  - 'lookaheads'
  - 'solved'
  - 'counterexamples', 'cex'
  - 'all'
./input.at:72: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror?all input.y
./m4.at:55: cat output.txt
stderr:
bison: option '--skeleton' requires an argument
Try 'bison --help' for more information.
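The doc/bison.help step earlier in the log pipes `--help` through sed so the build path does not leak into the generated file, then installs the result with build-aux/move-if-change, which leaves the target untouched (timestamp included) when nothing changed. A sketch of both steps using canned help text in place of a real `tests/bison` binary; `move_if_change` here is a hypothetical stand-in for the build-aux script:

```shell
set -e
# Stand-in for build-aux/move-if-change: rename only if contents differ.
move_if_change() {
  if cmp -s "$1" "$2"; then rm -f "$1"; else mv "$1" "$2"; fi
}

# Canned output playing the role of `LC_ALL=C tests/bison --help`.
printf '%s\n' \
  'Usage: /build/bison-3.8.2+dfsg/tests/bison [OPTION]... FILE' \
  'Report translation bugs to <bug-bison@gnu.org>.' \
  'General help line.' \
| sed -e 's,^Usage: .*/bison \[OPTION\],Usage: bison [OPTION],g' \
      -e '/translation bugs/d' > bison.help.tmp
move_if_change bison.help.tmp bison.help
```

The sed strips everything up to the last `/` in the Usage line and drops the locale-dependent translation-bugs line, so the file is identical on every build machine, which matters for reproducible builds.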
./input.at:43: sed -e \
  "s/requires an argument -- skeleton/'--skeleton' requires an argument/" \
  stderr
stderr:
bison: invalid argument 'error?all' for '--warning'
Valid arguments are:
  - 'all'
  - 'conflicts-rr'
  - 'conflicts-sr'
  - 'counterexamples', 'cex'
  - 'dangling-alias'
  - 'deprecated'
  - 'empty-rule'
  - 'everything'
  - 'midrule-values'
  - 'none'
  - 'other'
  - 'precedence'
  - 'yacc'
3. input.at:58: ok
1. m4.at:21: ok
2. input.at:27: ok
5. input.at:147: testing Invalid inputs with {} ...
4. input.at:83: testing Invalid inputs ...
./input.at:97: "$PERL" -pi -e 's/\\(\d{3})/chr(oct($1))/ge' input.y || exit 77
./input.at:162: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y
6. input.at:173: testing Yacc warnings on symbols ...
./input.at:182: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -Wyacc input.y
./input.at:99: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y
5. input.at:147: ok
stderr:
input.y:1.11: error: invalid null character
  1 | %header "ð€ˆ"
    | ^
input.y:2.1-2: error: invalid characters: '\0\001\002\377?'
  2 | ÿ?
    | ^~
input.y:3.2: error: invalid null character
  3 | ""
    | ^
input.y:5.1: error: invalid character: '?'
  5 | ?
    | ^
input.y:6.14: error: invalid character: '}'
  6 | default: 'a' }
    | ^
input.y:7.1: error: invalid character: '%'
  7 | %&
    | ^
input.y:7.2: error: invalid character: '&'
  7 | %&
    | ^
input.y:8.1-17: error: invalid directive: '%a-does-not-exist'
  8 | %a-does-not-exist
    | ^~~~~~~~~~~~~~~~~
input.y:9.1: error: invalid character: '%'
  9 | %-
    | ^
input.y:9.2: error: invalid character: '-'
  9 | %-
    | ^
input.y:10.1-11.0: error: missing '%}' at end of file
 10 | %{
    | ^~
./input.at:104: "$PERL" -p -e 's{([\0\200\210\360\377])}{sprintf "\\x%02x", ord($1)}ge' stderr
4. input.at:83: ok
7. input.at:204: testing Yacc warnings ...
./input.at:216: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -Wyacc input.y
8. input.at:238: testing Yacc's %type ...
./input.at:253: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -Wyacc input.y
./input.at:182: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wyacc input.y -Werror
./input.at:216: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wyacc input.y -Werror
./input.at:253: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wyacc input.y -Werror
stderr:
input.y:1.1-6: error: POSIX Yacc does not support %nterm [-Werror=yacc]
  1 | %nterm exp
    | ^~~~~~
input.y:2.12-15: error: POSIX Yacc does not support hexadecimal literals [-Werror=yacc]
  2 | %token NUM 0x40 "number"
    | ^~~~
input.y:2.17-24: error: POSIX Yacc does not support string literals
[-Werror=yacc] 2 | %token NUM 0x40 "number" | ^~~~~~~~ input.y:4.6-13: error: POSIX Yacc does not support string literals [-Werror=yacc] 4 | exp: "number"; | ^~~~~~~~ ./input.at:182: sed 's,.*/$,,' stderr 1>&2 ./input.at:182: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wyacc input.y --warnings=error stderr: input.y:1.1-11: error: POSIX Yacc does not support %destructor [-Werror=yacc] 1 | %destructor {} | ^~~~~~~~~~~ input.y:2.1-8: error: POSIX Yacc does not support %printer [-Werror=yacc] 2 | %printer {} | ^~~~~~~~ input.y:6.9-20: error: POSIX Yacc does not support typed midrule actions [-Werror=yacc] 6 | a: { $$ = 42; } { $$ = $1; }; | ^~~~~~~~~~~~ input.y:7.4-9: error: POSIX Yacc does not support %empty [-Werror=yacc] 7 | b: %empty { $$ = 42; }; | ^~~~~~ ./input.at:216: sed 's,.*/$,,' stderr 1>&2 ./input.at:216: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wyacc input.y --warnings=error stderr: input.y:2.1-6: error: POSIX Yacc does not support %nterm [-Werror=yacc] 2 | %nterm nterm1 | ^~~~~~ input.y:3.14-19: error: POSIX yacc reserves %type to nonterminals [-Werror=yacc] 3 | %type TOKEN1 TOKEN2 "TOKEN3" nterm1 nterm2 nterm3 '+' | ^~~~~~ input.y:3.28-35: error: POSIX Yacc does not support string literals [-Werror=yacc] 3 | %type TOKEN1 TOKEN2 "TOKEN3" nterm1 nterm2 nterm3 '+' | ^~~~~~~~ input.y:3.28-35: error: POSIX yacc reserves %type to nonterminals [-Werror=yacc] 3 | %type TOKEN1 TOKEN2 "TOKEN3" nterm1 nterm2 nterm3 '+' | ^~~~~~~~ input.y:3.58-60: error: POSIX yacc reserves %type to nonterminals [-Werror=yacc] 3 | %type TOKEN1 TOKEN2 "TOKEN3" nterm1 nterm2 nterm3 '+' | ^~~ input.y:5.1-6: error: POSIX Yacc does not support %nterm 
[-Werror=yacc] 5 | %nterm nterm2 | ^~~~~~ input.y:3.21-26: error: POSIX yacc reserves %type to nonterminals [-Werror=yacc] 3 | %type TOKEN1 TOKEN2 "TOKEN3" nterm1 nterm2 nterm3 '+' | ^~~~~~ input.y:10.9-16: error: POSIX Yacc does not support string literals [-Werror=yacc] 10 | nterm3: "TOKEN3" | ^~~~~~~~ ./input.at:182: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wyacc input.y -Wnone,none -Werror --trace=none ./input.at:253: sed 's,.*/$,,' stderr 1>&2 ./input.at:253: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wyacc input.y --warnings=error ./input.at:182: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wyacc input.y --warnings=none -Werror --trace=none ./input.at:216: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wyacc input.y -Wnone,none -Werror --trace=none ./input.at:253: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wyacc input.y -Wnone,none -Werror --trace=none 6. 
input.at:173: ok ./input.at:253: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wyacc input.y --warnings=none -Werror --trace=none 9. input.at:287: testing Invalid symbol declarations ... ./input.at:304: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y ./input.at:216: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wyacc input.y --warnings=none -Werror --trace=none 9. input.at:287: ok 10. input.at:341: testing Redefining the error token ... ./input.at:354: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y ./input.at:390: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y 8. input.at:238: ok 11. input.at:401: testing Dangling aliases ... ./input.at:410: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -Wdangling input.y 7. input.at:204: ok ./input.at:390: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS 12. input.at:427: testing Symbol declarations ... 
./input.at:467: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S./dump-symbols.m4 input.y ./input.at:410: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wdangling input.y -Werror stderr: input.y:2.13-17: error: string literal "bar" not attached to a symbol [-Werror=dangling-alias] 2 | %type "bar" | ^~~~~ input.y:4.19-23: error: string literal "baz" not attached to a symbol [-Werror=dangling-alias] 4 | expr: "foo" "bar" "baz" | ^~~~~ ./input.at:468: cat symbols.csv ./input.at:410: sed 's,.*/$,,' stderr 1>&2 12. input.at:427: ok ./input.at:410: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wdangling input.y --warnings=error 13. input.at:528: testing Invalid $n and @n ... ./input.at:536: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y 13. input.at:528: ok 14. input.at:552: testing Type Clashes ... ./input.at:565: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y 14. input.at:552: ok ./input.at:410: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wdangling input.y -Wnone,none -Werror --trace=none 15. input.at:774: testing Unused values ... 
./input.at:775: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret input.y ./input.at:410: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wdangling input.y --warnings=none -Werror --trace=none stderr: stdout: ./input.at:391: $PREPARSER ./input stderr: ./input.at:391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 10. input.at:341: ok 11. input.at:401: ok 16. input.at:784: testing Unused values before symbol declarations ... ./input.at:785: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret input.y 17. input.at:794: testing Symbol redeclared ... ./input.at:804: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret input.y ./input.at:775: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Werror ./input.at:804: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Werror stderr: input.y:1.12-14: error: symbol FOO redeclared [-Werror=other] 1 | %token FOO FOO | ^~~ input.y:1.8-10: note: previous declaration 1 | %token FOO FOO | ^~~ input.y:2.15-17: error: symbol BAR redeclared [-Werror=other] 2 | %token BAR 12 BAR 12 | ^~~ input.y:2.8-10: note: previous declaration 2 | %token BAR 12 BAR 12 | ^~~ input.y:3.14-16: error: symbol EOF redeclared [-Werror=other] 3 | %token EOF 0 EOF 0 | ^~~ input.y:3.8-10: note: previous declaration 3 | %token EOF 0 EOF 0 | ^~~ ./input.at:804: sed 
's,.*/$,,' stderr 1>&2 ./input.at:804: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=error ./input.at:804: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Wnone,none -Werror --trace=none stderr: input.y:12.10-32: error: unset value: $$ [-Werror=other] 12 | a: INT | INT { } INT { } INT { }; | ^~~~~~~~~~~~~~~~~~~~~~~ input.y:12.10-12: error: unused value: $1 [-Werror=other] 12 | a: INT | INT { } INT { } INT { }; | ^~~ input.y:12.18-20: error: unused value: $3 [-Werror=other] 12 | a: INT | INT { } INT { } INT { }; | ^~~ input.y:12.26-28: error: unused value: $5 [-Werror=other] 12 | a: INT | INT { } INT { } INT { }; | ^~~ input.y:13.10-15: error: empty rule for typed nonterminal, and no action [-Werror=other] 13 | b: INT | %empty; | ^~~~~~ input.y:14.10-62: error: unset value: $$ [-Werror=other] 14 | c: INT | INT { $1; } INT { $2; } INT { $4; }; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.y:14.22-24: error: unused value: $3 [-Werror=other] 14 | c: INT | INT { $1; } INT { $2; } INT { $4; }; | ^~~ input.y:14.43-45: error: unused value: $5 [-Werror=other] 14 | c: INT | INT { $1; } INT { $2; } INT { $4; }; | ^~~ input.y:15.10-49: error: unset value: $$ [-Werror=other] 15 | d: INT | INT { } INT { $1; } INT { $2; }; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.y:15.18-20: error: unused value: $3 [-Werror=other] 15 | d: INT | INT { } INT { $1; } INT { $2; }; | ^~~ input.y:15.30-32: error: unused value: $5 [-Werror=other] 15 | d: INT | INT { } INT { $1; } INT { $2; }; | ^~~ input.y:16.10-37: error: unset value: $$ [-Werror=other] 16 | e: INT | INT { } INT { } INT { $1; }; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~ 
input.y:16.18-20: error: unused value: $3 [-Werror=other] 16 | e: INT | INT { } INT { } INT { $1; }; | ^~~ input.y:16.27-29: error: unused value: $5 [-Werror=other] 16 | e: INT | INT { } INT { } INT { $1; }; | ^~~ input.y:18.10-58: error: unset value: $$ [-Werror=other] 18 | g: INT | INT { $$; } INT { $$; } INT { }; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.y:18.10-12: error: unused value: $1 [-Werror=other] 18 | g: INT | INT { $$; } INT { $$; } INT { }; | ^~~ input.y:18.31-33: error: unused value: $3 [-Werror=other] 18 | g: INT | INT { $$; } INT { $$; } INT { }; | ^~~ input.y:18.52-54: error: unused value: $5 [-Werror=other] 18 | g: INT | INT { $$; } INT { $$; } INT { }; | ^~~ input.y:19.10-72: error: unset value: $$ [-Werror=other] 19 | h: INT | INT { $$; } INT { $$ = $2; } INT { }; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.y:19.10-12: error: unused value: $1 [-Werror=other] 19 | h: INT | INT { $$; } INT { $$ = $2; } INT { }; | ^~~ input.y:19.31-33: error: unused value: $3 [-Werror=other] 19 | h: INT | INT { $$; } INT { $$ = $2; } INT { }; | ^~~ input.y:19.66-68: error: unused value: $5 [-Werror=other] 19 | h: INT | INT { $$; } INT { $$ = $2; } INT { }; | ^~~ input.y:22.10-68: error: unset value: $$ [-Werror=other] 22 | k: INT | INT INT { $$; } { $$ = $3; } { }; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.y:22.10-12: error: unused value: $1 [-Werror=other] 22 | k: INT | INT INT { $$; } { $$ = $3; } { }; | ^~~ input.y:22.14-16: error: unused value: $2 [-Werror=other] 22 | k: INT | INT INT { $$; } { $$ = $3; } { }; | ^~~ input.y:25.23-25: error: unset value: $$ [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~ input.y:25.40-42: error: unset value: $$ [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~ input.y:25.10-50: error: unset value: $$ [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.y:25.10-12: error: unused value: 
$1 [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~ input.y:25.23-25: error: unused value: $2 [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~ input.y:25.27-29: error: unused value: $3 [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~ input.y:25.40-42: error: unused value: $4 [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~ input.y:25.44-46: error: unused value: $5 [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~ input.y:26.23-25: error: unset value: $$ [-Werror=other] 26 | o: INT | INT { } INT { } INT { $$ = $1 + $2 + $3 + $4 + $5; }; | ^~~ input.y:26.40-42: error: unset value: $$ [-Werror=other] 26 | o: INT | INT { } INT { } INT { $$ = $1 + $2 + $3 + $4 + $5; }; | ^~~ ./input.at:775: sed 's,.*/$,,' stderr 1>&2 ./input.at:775: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=error ./input.at:804: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=none -Werror --trace=none ./input.at:785: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Werror 17. input.at:794: ok 18. input.at:832: testing EOF redeclared ... 
./input.at:843: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret input.y ./input.at:843: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Werror stderr: input.y:1.16-18: error: symbol FOO redeclared [-Werror=other] 1 | %token FOO BAR FOO 0 | ^~~ input.y:1.8-10: note: previous declaration 1 | %token FOO BAR FOO 0 | ^~~ ./input.at:843: sed 's,.*/$,,' stderr 1>&2 ./input.at:775: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Wnone,none -Werror --trace=none ./input.at:843: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=error stderr: input.y:12.10-32: error: unset value: $$ [-Werror=other] 12 | a: INT | INT { } INT { } INT { }; | ^~~~~~~~~~~~~~~~~~~~~~~ input.y:12.10-12: error: unused value: $1 [-Werror=other] 12 | a: INT | INT { } INT { } INT { }; | ^~~ input.y:12.18-20: error: unused value: $3 [-Werror=other] 12 | a: INT | INT { } INT { } INT { }; | ^~~ input.y:12.26-28: error: unused value: $5 [-Werror=other] 12 | a: INT | INT { } INT { } INT { }; | ^~~ input.y:13.10-15: error: empty rule for typed nonterminal, and no action [-Werror=other] 13 | b: INT | %empty; | ^~~~~~ input.y:14.10-62: error: unset value: $$ [-Werror=other] 14 | c: INT | INT { $1; } INT { $2; } INT { $4; }; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.y:14.22-24: error: unused value: $3 [-Werror=other] 14 | c: INT | INT { $1; } INT { $2; } INT { $4; }; | ^~~ input.y:14.43-45: error: unused value: $5 
[-Werror=other] 14 | c: INT | INT { $1; } INT { $2; } INT { $4; }; | ^~~ input.y:15.10-49: error: unset value: $$ [-Werror=other] 15 | d: INT | INT { } INT { $1; } INT { $2; }; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.y:15.18-20: error: unused value: $3 [-Werror=other] 15 | d: INT | INT { } INT { $1; } INT { $2; }; | ^~~ input.y:15.30-32: error: unused value: $5 [-Werror=other] 15 | d: INT | INT { } INT { $1; } INT { $2; }; | ^~~ input.y:16.10-37: error: unset value: $$ [-Werror=other] 16 | e: INT | INT { } INT { } INT { $1; }; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.y:16.18-20: error: unused value: $3 [-Werror=other] 16 | e: INT | INT { } INT { } INT { $1; }; | ^~~ input.y:16.27-29: error: unused value: $5 [-Werror=other] 16 | e: INT | INT { } INT { } INT { $1; }; | ^~~ input.y:18.10-58: error: unset value: $$ [-Werror=other] 18 | g: INT | INT { $$; } INT { $$; } INT { }; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.y:18.10-12: error: unused value: $1 [-Werror=other] 18 | g: INT | INT { $$; } INT { $$; } INT { }; | ^~~ input.y:18.31-33: error: unused value: $3 [-Werror=other] 18 | g: INT | INT { $$; } INT { $$; } INT { }; | ^~~ input.y:18.52-54: error: unused value: $5 [-Werror=other] 18 | g: INT | INT { $$; } INT { $$; } INT { }; | ^~~ input.y:19.10-72: error: unset value: $$ [-Werror=other] 19 | h: INT | INT { $$; } INT { $$ = $2; } INT { }; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.y:19.10-12: error: unused value: $1 [-Werror=other] 19 | h: INT | INT { $$; } INT { $$ = $2; } INT { }; | ^~~ input.y:19.31-33: error: unused value: $3 [-Werror=other] 19 | h: INT | INT { $$; } INT { $$ = $2; } INT { }; | ^~~ input.y:19.66-68: error: unused value: $5 [-Werror=other] 19 | h: INT | INT { $$; } INT { $$ = $2; } INT { }; | ^~~ input.y:22.10-68: error: unset value: $$ [-Werror=other] 22 | k: INT | INT INT { $$; } { $$ = $3; } { }; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.y:22.10-12: error: unused 
value: $1 [-Werror=other] 22 | k: INT | INT INT { $$; } { $$ = $3; } { }; | ^~~ input.y:22.14-16: error: unused value: $2 [-Werror=other] 22 | k: INT | INT INT { $$; } { $$ = $3; } { }; | ^~~ input.y:25.23-25: error: unset value: $$ [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~ input.y:25.40-42: error: unset value: $$ [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~ input.y:25.10-50: error: unset value: $$ [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.y:25.10-12: error: unused value: $1 [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~ input.y:25.23-25: error: unused value: $2 [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~ input.y:25.27-29: error: unused value: $3 [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~ input.y:25.40-42: error: unused value: $4 [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~ input.y:25.44-46: error: unused value: $5 [-Werror=other] 25 | n: INT | INT { } INT { } INT { }; | ^~~ input.y:26.23-25: error: unset value: $$ [-Werror=other] 26 | o: INT | INT { } INT { } INT { $$ = $1 + $2 + $3 + $4 + $5; }; | ^~~ input.y:26.40-42: error: unset value: $$ [-Werror=other] 26 | o: INT | INT { } INT { } INT { $$ = $1 + $2 + $3 + $4 + $5; }; | ^~~ ./input.at:843: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Wnone,none -Werror --trace=none ./input.at:785: sed 's,.*/$,,' stderr 1>&2 ./input.at:785: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=error ./input.at:843: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS 
--leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=none -Werror --trace=none 18. input.at:832: ok 19. input.at:859: testing Symbol class redefinition ... ./input.at:871: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y ./input.at:775: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=none -Werror --trace=none 19. input.at:859: ok 20. input.at:899: testing Default %printer and %destructor redeclared ... ./input.at:959: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y ./input.at:960: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y 20. input.at:899: ok 21. input.at:970: testing Per-type %printer and %destructor redeclared ... ./input.at:785: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Wnone,none -Werror --trace=none ./input.at:987: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y 21. input.at:970: ok 22. input.at:1013: testing Undefined symbols ... 
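Nearly every bison invocation in this testsuite run is prefixed with the same environment setup: COLUMNS=1000 pins a very wide terminal so diagnostics are not re-wrapped, NO_TERM_HYPERLINKS=1 suppresses terminal hyperlink escapes in the output, and VALGRIND_OPTS is extended rather than overwritten, so options a caller already exported survive. A sketch of that append-and-export pattern, with a hypothetical pre-existing value standing in for whatever the caller set:

```shell
set -e
# The caller may already have Valgrind options set; start from a sample value.
VALGRIND_OPTS="--num-callers=20"
# Append the testsuite's options, preserving whatever was there before.
VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"
export VALGRIND_OPTS
COLUMNS=1000; export COLUMNS
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS
echo "$VALGRIND_OPTS"
```

Because the variables are exported, they reach the bison (and, under Valgrind, memcheck) child processes without appearing on the command line itself.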
./input.at:1023: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y
22. input.at:1013: ok
23. input.at:1045: testing Unassociated types used for a printer or destructor ...
./input.at:1062: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.y
./input.at:776: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --warnings=midrule-values -fcaret input.y
./input.at:1062: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Werror
stderr:
input.y:4.22-28: error: type is used, but is not associated to any symbol [-Werror=other]
input.y:5.25-31: error: type is used, but is not associated to any symbol [-Werror=other]
./input.at:1062: sed 's,.*/$,,' stderr 1>&2
./input.at:1062: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=error
./input.at:776: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --warnings=midrule-values -fcaret input.y -Werror
./input.at:785: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=none -Werror --trace=none
./input.at:1062: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Wnone,none -Werror --trace=none
./input.at:1062: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=none -Werror --trace=none
23. input.at:1045: ok
./input.at:786: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --warnings=midrule-values -fcaret input.y
stderr:
24. input.at:1074: testing Useless printers or destructors ...
input.y:12.10-32: error: unset value: $$ [-Werror=other]
   12 | a: INT | INT { } INT { } INT { };
      |          ^~~~~~~~~~~~~~~~~~~~~~~
input.y:12.10-12: error: unused value: $1 [-Werror=other]
   12 | a: INT | INT { } INT { } INT { };
      |          ^~~
input.y:12.18-20: error: unused value: $3 [-Werror=other]
   12 | a: INT | INT { } INT { } INT { };
      |                  ^~~
input.y:12.26-28: error: unused value: $5 [-Werror=other]
   12 | a: INT | INT { } INT { } INT { };
      |                          ^~~
input.y:13.10-15: error: empty rule for typed nonterminal, and no action [-Werror=other]
   13 | b: INT | %empty;
      |          ^~~~~~
input.y:14.14-20: error: unset value: $$ [-Werror=midrule-values]
   14 | c: INT | INT { $1; } INT { $2; } INT { $4; };
      |              ^~~~~~~
input.y:14.26-41: error: unset value: $$ [-Werror=midrule-values]
   14 | c: INT | INT { $1; } INT { $2; } INT { $4; };
      |                          ^~~~~~~~~~~~~~~~
input.y:14.10-62: error: unset value: $$ [-Werror=other]
   14 | c: INT | INT { $1; } INT { $2; } INT { $4; };
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:14.22-24: error: unused value: $3 [-Werror=other]
   14 | c: INT | INT { $1; } INT { $2; } INT { $4; };
      |                      ^~~
input.y:14.43-45: error: unused value: $5 [-Werror=other]
   14 | c: INT | INT { $1; } INT { $2; } INT { $4; };
      |                                           ^~~
input.y:15.14-16: error: unset value: $$ [-Werror=midrule-values]
   15 | d: INT | INT { } INT { $1; } INT { $2; };
      |              ^~~
input.y:15.10-49: error: unset value: $$ [-Werror=other]
   15 | d: INT | INT { } INT { $1; } INT { $2; };
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:15.18-20: error: unused value: $3 [-Werror=other]
   15 | d: INT | INT { } INT { $1; } INT { $2; };
      |                  ^~~
input.y:15.30-32: error: unused value: $5 [-Werror=other]
   15 | d: INT | INT { } INT { $1; } INT { $2; };
      |                              ^~~
input.y:16.10-37: error: unset value: $$ [-Werror=other]
   16 | e: INT | INT { } INT { } INT { $1; };
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:16.18-20: error: unused value: $3 [-Werror=other]
   16 | e: INT | INT { } INT { } INT { $1; };
      |                  ^~~
input.y:16.27-29: error: unused value: $5 [-Werror=other]
   16 | e: INT | INT { } INT { } INT { $1; };
      |                           ^~~
input.y:18.10-58: error: unset value: $$ [-Werror=other]
   18 | g: INT | INT { $$; } INT { $$; } INT { };
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:18.10-12: error: unused value: $1 [-Werror=other]
   18 | g: INT | INT { $$; } INT { $$; } INT { };
      |          ^~~
input.y:18.14-29: error: unused value: $2 [-Werror=midrule-values]
   18 | g: INT | INT { $$; } INT { $$; } INT { };
      |              ^~~~~~~~~~~~~~~~
input.y:18.31-33: error: unused value: $3 [-Werror=other]
   18 | g: INT | INT { $$; } INT { $$; } INT { };
      |                               ^~~
input.y:18.35-50: error: unused value: $4 [-Werror=midrule-values]
   18 | g: INT | INT { $$; } INT { $$; } INT { };
      |                                   ^~~~~~~~~~~~~~~~
input.y:18.52-54: error: unused value: $5 [-Werror=other]
   18 | g: INT | INT { $$; } INT { $$; } INT { };
      |                                                    ^~~
input.y:19.10-72: error: unset value: $$ [-Werror=other]
   19 | h: INT | INT { $$; } INT { $$ = $2; } INT { };
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:19.10-12: error: unused value: $1 [-Werror=other]
   19 | h: INT | INT { $$; } INT { $$ = $2; } INT { };
      |          ^~~
input.y:19.31-33: error: unused value: $3 [-Werror=other]
   19 | h: INT | INT { $$; } INT { $$ = $2; } INT { };
      |                               ^~~
input.y:19.35-64: error: unused value: $4 [-Werror=midrule-values]
   19 | h: INT | INT { $$; } INT { $$ = $2; } INT { };
      |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:19.66-68: error: unused value: $5 [-Werror=other]
   19 | h: INT | INT { $$; } INT { $$ = $2; } INT { };
      |                                                                  ^~~
input.y:21.18-37: error: unused value: $3 [-Werror=midrule-values]
   21 | j: INT | INT INT { $$ = 1; } { $$ = $1 + $2; };
      |                  ^~~~~~~~~~~~~~~~~~~~
input.y:22.10-68: error: unset value: $$ [-Werror=other]
   22 | k: INT | INT INT { $$; } { $$ = $3; } { };
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:22.10-12: error: unused value: $1 [-Werror=other]
   22 | k: INT | INT INT { $$; } { $$ = $3; } { };
      |          ^~~
input.y:22.14-16: error: unused value: $2 [-Werror=other]
   22 | k: INT | INT INT { $$; } { $$ = $3; } { };
      |              ^~~
input.y:22.35-64: error: unused value: $4 [-Werror=midrule-values]
   22 | k: INT | INT INT { $$; } { $$ = $3; } { };
      |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:25.23-25: error: unset value: $$ [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |                       ^~~
input.y:25.40-42: error: unset value: $$ [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |                                        ^~~
input.y:25.10-50: error: unset value: $$ [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:25.10-12: error: unused value: $1 [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |          ^~~
input.y:25.23-25: error: unused value: $2 [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |                       ^~~
input.y:25.27-29: error: unused value: $3 [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |                           ^~~
input.y:25.40-42: error: unused value: $4 [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |                                        ^~~
input.y:25.44-46: error: unused value: $5 [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |                                            ^~~
input.y:26.23-25: error: unset value: $$ [-Werror=other]
   26 | o: INT | INT { } INT { } INT { $$ = $1 + $2 + $3 + $4 + $5; };
      |                       ^~~
input.y:26.40-42: error: unset value: $$ [-Werror=other]
   26 | o: INT | INT { } INT { } INT { $$ = $1 + $2 + $3 + $4 + $5; };
      |                                        ^~~
./input.at:1085: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.y
./input.at:776: sed 's,.*/$,,' stderr 1>&2
./input.at:776: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --warnings=midrule-values -fcaret input.y --warnings=error
./input.at:1085: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Werror
stderr:
input.y:16.13-19: error: useless %printer for type [-Werror=other]
input.y:17.16-22: error: useless %destructor for type [-Werror=other]
./input.at:1085: sed 's,.*/$,,' stderr 1>&2
./input.at:1085: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=error
./input.at:776: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --warnings=midrule-values -fcaret input.y -Wnone,none -Werror --trace=none
./input.at:786: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --warnings=midrule-values -fcaret input.y -Werror
./input.at:1085: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Wnone,none -Werror --trace=none
./input.at:1085: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=none -Werror --trace=none
./input.at:776: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --warnings=midrule-values -fcaret input.y --warnings=none -Werror --trace=none
stderr:
input.y:12.10-32: error: unset value: $$ [-Werror=other]
   12 | a: INT | INT { } INT { } INT { };
      |          ^~~~~~~~~~~~~~~~~~~~~~~
input.y:12.10-12: error: unused value: $1 [-Werror=other]
   12 | a: INT | INT { } INT { } INT { };
      |          ^~~
input.y:12.18-20: error: unused value: $3 [-Werror=other]
   12 | a: INT | INT { } INT { } INT { };
      |                  ^~~
input.y:12.26-28: error: unused value: $5 [-Werror=other]
   12 | a: INT | INT { } INT { } INT { };
      |                          ^~~
input.y:13.10-15: error: empty rule for typed nonterminal, and no action [-Werror=other]
   13 | b: INT | %empty;
      |          ^~~~~~
input.y:14.14-20: error: unset value: $$ [-Werror=midrule-values]
   14 | c: INT | INT { $1; } INT { $2; } INT { $4; };
      |              ^~~~~~~
input.y:14.26-41: error: unset value: $$ [-Werror=midrule-values]
   14 | c: INT | INT { $1; } INT { $2; } INT { $4; };
      |                          ^~~~~~~~~~~~~~~~
input.y:14.10-62: error: unset value: $$ [-Werror=other]
   14 | c: INT | INT { $1; } INT { $2; } INT { $4; };
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:14.22-24: error: unused value: $3 [-Werror=other]
   14 | c: INT | INT { $1; } INT { $2; } INT { $4; };
      |                      ^~~
input.y:14.43-45: error: unused value: $5 [-Werror=other]
   14 | c: INT | INT { $1; } INT { $2; } INT { $4; };
      |                                           ^~~
input.y:15.14-16: error: unset value: $$ [-Werror=midrule-values]
   15 | d: INT | INT { } INT { $1; } INT { $2; };
      |              ^~~
input.y:15.10-49: error: unset value: $$ [-Werror=other]
   15 | d: INT | INT { } INT { $1; } INT { $2; };
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:15.18-20: error: unused value: $3 [-Werror=other]
   15 | d: INT | INT { } INT { $1; } INT { $2; };
      |                  ^~~
input.y:15.30-32: error: unused value: $5 [-Werror=other]
   15 | d: INT | INT { } INT { $1; } INT { $2; };
      |                              ^~~
input.y:16.10-37: error: unset value: $$ [-Werror=other]
   16 | e: INT | INT { } INT { } INT { $1; };
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:16.18-20: error: unused value: $3 [-Werror=other]
   16 | e: INT | INT { } INT { } INT { $1; };
      |                  ^~~
input.y:16.27-29: error: unused value: $5 [-Werror=other]
   16 | e: INT | INT { } INT { } INT { $1; };
      |                           ^~~
input.y:18.10-58: error: unset value: $$ [-Werror=other]
   18 | g: INT | INT { $$; } INT { $$; } INT { };
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:18.10-12: error: unused value: $1 [-Werror=other]
   18 | g: INT | INT { $$; } INT { $$; } INT { };
      |          ^~~
input.y:18.14-29: error: unused value: $2 [-Werror=midrule-values]
   18 | g: INT | INT { $$; } INT { $$; } INT { };
      |              ^~~~~~~~~~~~~~~~
input.y:18.31-33: error: unused value: $3 [-Werror=other]
   18 | g: INT | INT { $$; } INT { $$; } INT { };
      |                               ^~~
input.y:18.35-50: error: unused value: $4 [-Werror=midrule-values]
   18 | g: INT | INT { $$; } INT { $$; } INT { };
      |                                   ^~~~~~~~~~~~~~~~
input.y:18.52-54: error: unused value: $5 [-Werror=other]
   18 | g: INT | INT { $$; } INT { $$; } INT { };
      |                                                    ^~~
input.y:19.10-72: error: unset value: $$ [-Werror=other]
   19 | h: INT | INT { $$; } INT { $$ = $2; } INT { };
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:19.10-12: error: unused value: $1 [-Werror=other]
   19 | h: INT | INT { $$; } INT { $$ = $2; } INT { };
      |          ^~~
input.y:19.31-33: error: unused value: $3 [-Werror=other]
   19 | h: INT | INT { $$; } INT { $$ = $2; } INT { };
      |                               ^~~
input.y:19.35-64: error: unused value: $4 [-Werror=midrule-values]
   19 | h: INT | INT { $$; } INT { $$ = $2; } INT { };
      |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:19.66-68: error: unused value: $5 [-Werror=other]
   19 | h: INT | INT { $$; } INT { $$ = $2; } INT { };
      |                                                                  ^~~
input.y:21.18-37: error: unused value: $3 [-Werror=midrule-values]
   21 | j: INT | INT INT { $$ = 1; } { $$ = $1 + $2; };
      |                  ^~~~~~~~~~~~~~~~~~~~
input.y:22.10-68: error: unset value: $$ [-Werror=other]
   22 | k: INT | INT INT { $$; } { $$ = $3; } { };
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:22.10-12: error: unused value: $1 [-Werror=other]
   22 | k: INT | INT INT { $$; } { $$ = $3; } { };
      |          ^~~
input.y:22.14-16: error: unused value: $2 [-Werror=other]
   22 | k: INT | INT INT { $$; } { $$ = $3; } { };
      |              ^~~
input.y:22.35-64: error: unused value: $4 [-Werror=midrule-values]
   22 | k: INT | INT INT { $$; } { $$ = $3; } { };
      |                                   ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:25.23-25: error: unset value: $$ [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |                       ^~~
input.y:25.40-42: error: unset value: $$ [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |                                        ^~~
input.y:25.10-50: error: unset value: $$ [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |          ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:25.10-12: error: unused value: $1 [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |          ^~~
input.y:25.23-25: error: unused value: $2 [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |                       ^~~
input.y:25.27-29: error: unused value: $3 [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |                           ^~~
input.y:25.40-42: error: unused value: $4 [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |                                        ^~~
input.y:25.44-46: error: unused value: $5 [-Werror=other]
   25 | n: INT | INT { } INT { } INT { };
      |                                            ^~~
input.y:26.23-25: error: unset value: $$ [-Werror=other]
   26 | o: INT | INT { } INT { } INT { $$ = $1 + $2 + $3 + $4 + $5; };
      |                       ^~~
input.y:26.40-42: error: unset value: $$ [-Werror=other]
   26 | o: INT | INT { } INT { } INT { $$ = $1 + $2 + $3 + $4 + $5; };
      |                                        ^~~
./input.at:786: sed 's,.*/$,,' stderr 1>&2
./input.at:786: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --warnings=midrule-values -fcaret input.y --warnings=error
./input.at:1116: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.y
./input.at:1116: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Werror
./input.at:786: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --warnings=midrule-values -fcaret input.y -Wnone,none -Werror --trace=none
stderr:
input.y:3.13-14: error: useless %printer for type <> [-Werror=other]
./input.at:1116: sed 's,.*/$,,' stderr 1>&2
./input.at:1116: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=error
15. input.at:774: ok
25. input.at:1139: testing Unused values with default %destructor ...
./input.at:1152: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret input.y
./input.at:1116: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Wnone,none -Werror --trace=none
./input.at:1152: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Werror
./input.at:1116: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=none -Werror --trace=none
stderr:
input.y:6.8-45: error: unset value: $$ [-Werror=other]
    6 | start: end end tagged tagged { $1; $3; } ;
      |        ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.y:6.12-14: error: unused value: $2 [-Werror=other]
    6 | start: end end tagged tagged { $1; $3; } ;
      |            ^~~
input.y:7.6-8: error: unset value: $$ [-Werror=other]
    7 | end: { } ;
      |      ^~~
./input.at:1152: sed 's,.*/$,,' stderr 1>&2
./input.at:1152: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=error
./input.at:1124: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.y
./input.at:1152: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Wnone,none -Werror --trace=none
./input.at:786: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --warnings=midrule-values -fcaret input.y --warnings=none -Werror --trace=none
./input.at:1124: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Werror
./input.at:1152: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=none -Werror --trace=none
stderr:
input.y:2.16-18: error: useless %printer for type <*> [-Werror=other]
./input.at:1124: sed 's,.*/$,,' stderr 1>&2
./input.at:1175: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.y
./input.at:1124: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=error
./input.at:1124: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Wnone,none -Werror --trace=none
./input.at:1175: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Werror
stderr:
input.y:6.23-28: error: unused value: $4 [-Werror=other]
input.y:8.9-11: error: unset value: $$ [-Werror=other]
16. input.at:784: ok
./input.at:1175: sed 's,.*/$,,' stderr 1>&2
./input.at:1175: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=error
./input.at:1124: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=none -Werror --trace=none
26. input.at:1187: testing Unused values with per-type %destructor ...
./input.at:1199: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret input.y
./input.at:1175: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Wnone,none -Werror --trace=none
./input.at:1199: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Werror
24. input.at:1074: ok
27. input.at:1219: testing Duplicate string ...
./input.at:1236: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -v -o input.c input.y
./input.at:1175: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=none -Werror --trace=none
stderr:
input.y:6.8-22: error: unset value: $$ [-Werror=other]
    6 | start: end end { $1; } ;
      |        ^~~~~~~~~~~~~~~
input.y:6.12-14: error: unused value: $2 [-Werror=other]
    6 | start: end end { $1; } ;
      |            ^~~
input.y:7.6-8: error: unset value: $$ [-Werror=other]
    7 | end: { } ;
      |      ^~~
./input.at:1199: sed 's,.*/$,,' stderr 1>&2
./input.at:1199: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=error
./input.at:1236: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -v -o input.c input.y -Werror
25. input.at:1139: ok
28. input.at:1247: testing Token collisions ...
./input.at:1256: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y
28. input.at:1247: ok
./input.at:1199: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Wnone,none -Werror --trace=none
stderr:
input.y:6.11-14: error: symbol "<=" used more than once as a literal string [-Werror=other]
./input.at:1236: sed 's,.*/$,,' stderr 1>&2
./input.at:1236: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -v -o input.c input.y --warnings=error
29. input.at:1275: testing Incompatible Aliases ...
./input.at:1285: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y
./input.at:1299: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y
./input.at:1313: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y
./input.at:1327: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y
./input.at:1199: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=none -Werror --trace=none
./input.at:1344: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y
./input.at:1359: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y
./input.at:1236: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -v -o input.c input.y -Wnone,none -Werror --trace=none
./input.at:1374: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y
29. input.at:1275: ok
26. input.at:1187: ok
30. input.at:1400: testing Torturing the Scanner ...
./input.at:1407: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y
./input.at:1554: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -v -o input.c input.y
31. input.at:1569: testing Typed symbol aliases ...
./input.at:1586: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./input.at:1236: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -v -o input.c input.y --warnings=none -Werror --trace=none
31. input.at:1569: ok
27. input.at:1219: ok
32. input.at:1609: testing Require 1.0 ...
./input.at:1609: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
33. input.at:1610: testing Require 3.8.2 ...
./input.at:1555: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
./input.at:1610: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
stderr:
32. input.at:1609: ok
stderr:
33. input.at:1610: ok
34. input.at:1612: testing Require 100.0 ...
./input.at:1612: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y
stderr:
input.y:9.10-16: error: require bison 100.0, but have 3.8.2
34. input.at:1612: ok
35. input.at:1619: testing String aliases for character tokens ...
./input.at:1632: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
36. input.at:1642: testing Symbols ...
./input.at:1666: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --yacc input.y
35. input.at:1619: ok
37. input.at:1708: testing Numbered tokens ...
./input.at:1720: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret redecl.y
./input.at:1666: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --yacc input.y -Werror
stderr:
stdout:
./input.at:1556: $CC $CFLAGS $CPPFLAGS -c -o main.o main.c
./input.at:1735: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret too-large.y
37. input.at:1708: ok
stderr:
38. input.at:1750: testing Unclosed constructs ...
./input.at:1779: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y
input.y:1.1-5: error: POSIX Yacc does not support %code [-Werror=yacc]
input.y:9.8-16: error: POSIX Yacc forbids dashes in symbol names: WITH-DASH [-Werror=yacc]
input.y:10.21-34: error: POSIX Yacc does not support string literals [-Werror=yacc]
input.y:12.23-38: error: POSIX Yacc does not support string literals [-Werror=yacc]
input.y:13.1-5: error: POSIX Yacc does not support %code [-Werror=yacc]
input.y:20.8-16: error: POSIX Yacc forbids dashes in symbol names: with-dash [-Werror=yacc]
input.y:22.15-28: error: POSIX Yacc does not support string literals [-Werror=yacc]
input.y:24.17-32: error: POSIX Yacc does not support string literals [-Werror=yacc]
38. input.at:1750: ok
./input.at:1666: sed 's,.*/$,,' stderr 1>&2
stderr:
stdout:
./input.at:1666: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --yacc input.y --warnings=error
./input.at:1557: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.o main.o $LIBS
39. input.at:1805: testing %start after first rule ...
./input.at:1817: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./input.at:1666: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --yacc input.y -Wnone,none -Werror --trace=none
39. input.at:1805: ok
stderr:
stdout:
./input.at:1558: $PREPARSER ./input
stderr:
./input.at:1558: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
30. input.at:1400: ok
40. input.at:1826: testing Duplicate %start symbol ...
./input.at:1836: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret input.y
41. input.at:1895: testing %prec takes a token ...
./input.at:1905: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y
41. input.at:1895: ok
./input.at:1666: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --yacc input.y --warnings=none -Werror --trace=none
42. input.at:1916: testing %prec's token must be defined ...
./input.at:1925: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.y
./input.at:1836: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Werror
./input.at:1925: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Werror
stderr:
input.y:1.12-14: error: duplicate directive [-Werror=other]
    1 | %start exp exp exp
      |            ^~~
input.y:1.8-10: note: previous declaration
    1 | %start exp exp exp
      |        ^~~
input.y:1.16-18: error: duplicate directive [-Werror=other]
    1 | %start exp exp exp
      |                ^~~
input.y:1.8-10: note: previous declaration
    1 | %start exp exp exp
      |        ^~~
input.y: error: fix-its can be applied.  Rerun with option '--update'. [-Werror=other]
./input.at:1836: sed 's,.*/$,,' stderr 1>&2
./input.at:1836: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=error
./input.at:1678: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./input.at:1836: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Wnone,none -Werror --trace=none
stderr:
input.y:2.8-17: error: token for %prec is not defined: PREC [-Werror=other]
./input.at:1925: sed 's,.*/$,,' stderr 1>&2
./input.at:1925: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=error
./input.at:1836: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=none -Werror --trace=none
./input.at:1681: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
./input.at:1925: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Wnone,none -Werror --trace=none
./input.at:1925: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=none -Werror --trace=none
./input.at:1859: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret input.y
stderr:
stdout:
./input.at:1694: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y
./input.at:1859: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Werror
36. input.at:1642: ok
42. input.at:1916: ok
43. input.at:1936: testing Reject unused %code qualifiers ...
./input.at:1946: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input-c.y
44. input.at:2025: testing Multiple %code ...
./input.at:2054: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
stderr:
input.y:1.16-18: error: duplicate directive [-Werror=other]
    1 | %start exp foo exp
      |                ^~~
input.y:1.8-10: note: previous declaration
    1 | %start exp foo exp
      |        ^~~
input.y: error: fix-its can be applied.  Rerun with option '--update'. [-Werror=other]
./input.at:1859: sed 's,.*/$,,' stderr 1>&2
./input.at:2054: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./input.at:1859: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=error
./input.at:1960: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input-c-glr.y
./input.at:1973: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input-c++.y
./input.at:1859: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Wnone,none -Werror --trace=none
./input.at:1986: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input-c++-glr.y
stderr:
stdout:
./input.at:2055: $PREPARSER ./input
stderr:
./input.at:2055: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./input.at:1859: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=none -Werror --trace=none
44. input.at:2025: ok
45. input.at:2065: testing errors ...
./input.at:2077: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input-redefined.y ./input.at:1877: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret input.y ./input.at:2091: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input-unused.y ./input.at:1999: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret special-char-@@.y ./input.at:1877: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Werror 45. input.at:2065: ok ./input.at:2012: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret special-char-].y 46. input.at:2102: testing %define, --define, --force-define ... 
./input.at:2118: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dvar-dd=cmd-d1 -Dvar-dd=cmd-d2 \ -Fvar-ff=cmd-f1 -Fvar-ff=cmd-f2 \ -Dvar-dfg=cmd-d -Fvar-dfg=cmd-f \ -Fvar-fd=cmd-f -Dvar-fd=cmd-d \ --skeleton ./skel.c input.y ./input.at:2123: cat input.tab.c ./input.at:2135: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dvar=cmd-d input-dg.y ./input.at:2146: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Dvar=cmd-d input-dg.y 43. input.at:1936: ok ./input.at:2158: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dunused-d -Funused-f input-unused.y 47. input.at:2170: testing "%define" Boolean variables ... ./input.at:2180: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret Input.y stderr: input.y:2.8-10: error: duplicate directive [-Werror=other] 2 | %start exp | ^~~ input.y:1.8-10: note: previous declaration 1 | %start exp foo | ^~~ input.y: error: fix-its can be applied. Rerun with option '--update'. [-Werror=other] ./input.at:1877: sed 's,.*/$,,' stderr 1>&2 ./input.at:1877: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=error 46. input.at:2102: ok 47. input.at:2170: ok 48. 
input.at:2191: testing "%define" code variables ... ./input.at:2213: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.yy 49. input.at:2224: testing "%define" keyword variables ... ./input.at:2246: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.y ./input.at:1877: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Wnone,none -Werror --trace=none ./input.at:2246: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Werror ./input.at:1877: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=none -Werror --trace=none ./input.at:2213: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.yy -Werror stderr: input.y:5.1-40: error: %define variable 'lr.type' requires keyword values [-Werror=deprecated] input.y:3.1-40: error: %define variable 'lr.default-reduction' requires keyword values [-Werror=deprecated] input.y:4.1-40: error: %define variable 'lr.keep-unreachable-state' requires keyword values [-Werror=deprecated] input.y:1.1-38: error: %define variable 'api.pure' requires keyword values [-Werror=deprecated] input.y:2.1-40: error: %define variable 'api.push-pull' requires keyword values [-Werror=deprecated] ./input.at:2246: sed 's,.*/$,,' stderr 1>&2 ./input.at:2246: COLUMNS=1000; export COLUMNS; 
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=error 40. input.at:1826: ok stderr: input.yy:2.1-30: error: %define variable 'api.location.type' requires '{...}' values [-Werror=deprecated] input.yy:4.1-30: error: %define variable 'api.prefix' requires '{...}' values [-Werror=deprecated] input.yy:5.1-30: error: %define variable 'api.token.prefix' requires '{...}' values [-Werror=deprecated] input.yy:3.1-30: error: %define variable 'api.namespace' requires '{...}' values [-Werror=deprecated] ./input.at:2213: sed 's,.*/$,,' stderr 1>&2 50. input.at:2257: testing "%define" enum variables ... ./input.at:2269: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y ./input.at:2213: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.yy --warnings=error ./input.at:2284: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y ./input.at:2246: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Wnone,none -Werror --trace=none ./input.at:2303: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y ./input.at:2213: COLUMNS=1000; export COLUMNS; 
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.yy -Wnone,none -Werror --trace=none 50. input.at:2257: ok 51. input.at:2320: testing "%define" file variables ... ./input.at:2329: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y ./input.at:2246: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=none -Werror --trace=none ./input.at:2247: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.y ./input.at:2213: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.yy --warnings=none -Werror --trace=none 51. input.at:2320: ok 52. input.at:2342: testing "%define" backward compatibility ... ./input.at:2355: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y 52. input.at:2342: ok 53. input.at:2393: testing Unused api.pure ... 
./input.at:2413: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y ./input.at:2247: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Werror ./input.at:2214: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.yy stderr: ./input.at:2414: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y input.y:5.1-40: error: %define variable 'lr.type' requires keyword values [-Werror=deprecated] input.y:3.1-40: error: %define variable 'lr.default-reduction' requires keyword values [-Werror=deprecated] input.y:4.1-40: error: %define variable 'lr.keep-unreachable-state' requires keyword values [-Werror=deprecated] input.y:1.1-38: error: %define variable 'api.pure' requires keyword values [-Werror=deprecated] input.y:2.1-40: error: %define variable 'api.push-pull' requires keyword values [-Werror=deprecated] ./input.at:2247: sed 's,.*/$,,' stderr 1>&2 ./input.at:2247: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=error ./input.at:2415: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y ./input.at:2247: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS 
--leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Wnone,none -Werror --trace=none ./input.at:2214: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.yy -Werror ./input.at:2416: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y ./input.at:2247: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=none -Werror --trace=none ./input.at:2417: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y stderr: input.yy:2.1-32: error: %define variable 'api.location.type' requires '{...}' values [-Werror=deprecated] input.yy:4.1-32: error: %define variable 'api.prefix' requires '{...}' values [-Werror=deprecated] input.yy:5.1-32: error: %define variable 'api.token.prefix' requires '{...}' values [-Werror=deprecated] input.yy:3.1-32: error: %define variable 'api.namespace' requires '{...}' values [-Werror=deprecated] ./input.at:2214: sed 's,.*/$,,' stderr 1>&2 ./input.at:2214: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.yy --warnings=error ./input.at:2418: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison 
--color=no -fno-caret input.y 49. input.at:2224: ok 54. input.at:2429: testing C++ namespace reference errors ... ./input.at:2450: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y 53. input.at:2393: ok ./input.at:2214: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.yy -Wnone,none -Werror --trace=none 55. input.at:2482: testing Bad character literals ... ./input.at:2484: set x `LC_ALL=C ls -l 'empty.y'` && size=$6 && { test $size -eq 0 || dd obs=1 seek=`expr $size - 1` if=/dev/null of='empty.y'; } || exit 77 stderr: 0+0 records in 0+0 records out 0 bytes copied, 0.000167998 s, 0.0 kB/s stdout: ./input.at:2490: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret empty.y ./input.at:2508: set x `LC_ALL=C ls -l 'two.y'` && size=$6 && { test $size -eq 0 || dd obs=1 seek=`expr $size - 1` if=/dev/null of='two.y'; } || exit 77 stderr: ./input.at:2214: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.yy --warnings=none -Werror --trace=none 0+0 records in 0+0 records out 0 bytes copied, 0.0122262 s, 0.0 kB/s stdout: ./input.at:2452: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y ./input.at:2514: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; 
VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret two.y ./input.at:2522: set x `LC_ALL=C ls -l 'three.y'` && size=$6 && { test $size -eq 0 || dd obs=1 seek=`expr $size - 1` if=/dev/null of='three.y'; } || exit 77 stderr: 0+0 records in 0+0 records out 0 bytes copied, 0.00025637 s, 0.0 kB/s stdout: ./input.at:2528: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret three.y 55. input.at:2482: ok 56. input.at:2543: testing Bad escapes in literals ... ./input.at:2556: "$PERL" -e 'print "start: \"\\\t\\\f\\\0\\\1\" ;";' >> input.y || exit 77 ./input.at:2558: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y 56. input.at:2543: ok ./input.at:2454: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y 57. input.at:2582: testing Unexpected end of file ... ./input.at:2586: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y ./input.at:2591: set x `LC_ALL=C ls -l 'char.y'` && size=$6 && { test $size -eq 0 || dd obs=1 seek=`expr $size - 1` if=/dev/null of='char.y'; } || exit 77 48. 
input.at:2191: ok stderr: 0+0 records in 0+0 records out 0 bytes copied, 0.000162164 s, 0.0 kB/s stdout: ./input.at:2594: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret char.y ./input.at:2604: set x `LC_ALL=C ls -l 'escape-in-char.y'` && size=$6 && { test $size -eq 0 || dd obs=1 seek=`expr $size - 1` if=/dev/null of='escape-in-char.y'; } || exit 77 58. input.at:2675: testing LAC: Errors for %define ... ./input.at:2685: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Syacc.c -Dparse.lac=none input.y stderr: ./input.at:2456: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y 0+0 records in 0+0 records out 0 bytes copied, 0.000202704 s, 0.0 kB/s stdout: ./input.at:2607: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret escape-in-char.y ./input.at:2617: set x `LC_ALL=C ls -l 'string.y'` && size=$6 && { test $size -eq 0 || dd obs=1 seek=`expr $size - 1` if=/dev/null of='string.y'; } || exit 77 stderr: 0+0 records in 0+0 records out 0 bytes copied, 0.000213497 s, 0.0 kB/s stdout: ./input.at:2620: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret string.y ./input.at:2630: set x `LC_ALL=C ls -l 'escape-in-string.y'` && size=$6 && { test $size -eq 0 || dd obs=1 seek=`expr $size - 1` if=/dev/null of='escape-in-string.y'; } || exit 77 stderr: 0+0 records in 0+0 records out 0 bytes 
copied, 0.000223413 s, 0.0 kB/s stdout: ./input.at:2633: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret escape-in-string.y ./input.at:2685: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Syacc.c -Dparse.lac=full input.y ./input.at:2643: set x `LC_ALL=C ls -l 'tstring.y'` && size=$6 && { test $size -eq 0 || dd obs=1 seek=`expr $size - 1` if=/dev/null of='tstring.y'; } || exit 77 stderr: 0+0 records in 0+0 records out 0 bytes copied, 0.000138831 s, 0.0 kB/s stdout: ./input.at:2646: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret tstring.y ./input.at:2656: set x `LC_ALL=C ls -l 'escape-in-tstring.y'` && size=$6 && { test $size -eq 0 || dd obs=1 seek=`expr $size - 1` if=/dev/null of='escape-in-tstring.y'; } || exit 77 stderr: 0+0 records in 0+0 records out 0 bytes copied, 0.000211747 s, 0.0 kB/s stdout: ./input.at:2659: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret escape-in-tstring.y ./input.at:2458: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y 57. input.at:2582: ok 59. input.at:2719: testing -Werror combinations ... 
./input.at:2727: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall input.y ./input.at:2685: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Syacc.c -Dparse.lac=unsupported input.y ./input.at:2460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y ./input.at:2727: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall input.y -Werror ./input.at:2462: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y stderr: input.y:2.15: error: stray '$' [-Werror=other] ./input.at:2727: sed 's,.*/$,,' stderr 1>&2 ./input.at:2727: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall input.y --warnings=error ./input.at:2685: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Slalr1.cc -Dparse.lac=none input.y ./input.at:2465: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y ./input.at:2727: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; 
export VALGRIND_OPTS; bison --color=no -fno-caret -Wall input.y -Wnone,none -Werror --trace=none ./input.at:2685: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Slalr1.cc -Dparse.lac=full input.y ./input.at:2467: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y ./input.at:2727: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall input.y --warnings=none -Werror --trace=none ./input.at:2685: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Slalr1.cc -Dparse.lac=unsupported input.y ./input.at:2730: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -W input.y ./input.at:2469: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y ./input.at:2685: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Slalr1.d -Dparse.lac=none input.y ./input.at:2730: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -W input.y -Werror ./input.at:2685: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Slalr1.d -Dparse.lac=full input.y 54. input.at:2429: ok 60. 
input.at:2764: testing %name-prefix and api.prefix are incompatible ... ./input.at:2779: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wno-deprecated input.y stderr: input.y:2.15: error: stray '$' [-Werror=other] ./input.at:2730: sed 's,.*/$,,' stderr 1>&2 ./input.at:2730: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -W input.y --warnings=error ./input.at:2685: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Slalr1.d -Dparse.lac=unsupported input.y ./input.at:2730: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -W input.y -Wnone,none -Werror --trace=none ./input.at:2780: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dapi.prefix={foo} -p bar -Wno-deprecated input.y ./input.at:2685: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Slalr1.java -Dparse.lac=none input.y ./input.at:2730: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -W input.y --warnings=none -Werror --trace=none ./input.at:2685: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no 
-fno-caret -Slalr1.java -Dparse.lac=full input.y
./input.at:2781: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dapi.prefix={foo} -Wno-deprecated input.y
./input.at:2733: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-none input.y
./input.at:2685: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Slalr1.java -Dparse.lac=unsupported input.y
./input.at:2782: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -p bar -Wno-deprecated input.y
./input.at:2733: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wno-none input.y -Werror
./input.at:2697: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dparse.lac.es-capacity-initial=1 -Dparse.lac.memory-trace=full input.y
stderr:
input.y:2.15: error: stray '$' [-Werror=other]
./input.at:2733: sed 's,.*/$,,' stderr 1>&2
60. input.at:2764: ok
./input.at:2733: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wno-none input.y --warnings=error
61. input.at:2793: testing Redefined %union name ...
./input.at:2808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.y
./input.at:2704: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Slalr1.cc -Dparse.lac=full -Dparse.lac.es-capacity-initial=1 -Dparse.lac.memory-trace=full input.y
./input.at:2733: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wno-none input.y -Wnone,none -Werror --trace=none
./input.at:2808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Werror
./input.at:2704: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Slalr1.java -Dparse.lac=full -Dparse.lac.es-capacity-initial=1 -Dparse.lac.memory-trace=full input.y
./input.at:2733: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wno-none input.y --warnings=none -Werror --trace=none
stderr:
input.y:3.8-10: error: %define variable 'api.value.union.name' redefined [-Werror=other]
input.y:1.8-10: note: previous definition
input.y:4.1-32: error: %define variable 'api.value.union.name' redefined [-Werror=other]
input.y:3.8-10: note: previous definition
input.y: error: fix-its can be applied. Rerun with option '--update'. [-Werror=other]
./input.at:2808: sed 's,.*/$,,' stderr 1>&2
58. input.at:2675: ok
./input.at:2808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=error
./input.at:2738: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror,none,other input.y
62. input.at:2840: testing Stray $ or @ ...
./input.at:2861: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall input.y
./input.at:2861: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall input.y -Werror
./input.at:2808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Wnone,none -Werror --trace=none
./input.at:2741: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror,no-all,other input.y
./input.at:2808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=none -Werror --trace=none
./input.at:2746: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Werror -Wno-error=other input.y
./input.at:2750: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-error=other -Werror input.y
./input.at:2820: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y
stderr:
input.y:11.19: error: stray '$' [-Werror=other]
input.y:11.23: error: stray '@' [-Werror=other]
input.y:12.19: error: stray '$' [-Werror=other]
input.y:12.23: error: stray '@' [-Werror=other]
input.y:13.19: error: stray '$' [-Werror=other]
input.y:13.23: error: stray '@' [-Werror=other]
input.y:16.19: error: stray '$' [-Werror=other]
input.y:16.23: error: stray '@' [-Werror=other]
input.y:17.19: error: stray '$' [-Werror=other]
./input.at:2861: sed 's,.*/$,,' stderr 1>&2
./input.at:2861: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall input.y --warnings=error
./input.at:2754: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Werror=other -Wno-other input.y
./input.at:2825: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y
./input.at:2861: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall input.y -Wnone,none -Werror --trace=none
59. input.at:2719: ok
61. input.at:2793: ok
63. input.at:2883: testing Code injection ...
./input.at:2934: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S yacc.c -d input.y
64. input.at:2946: testing Deprecated directives ...
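The "Stray $ or @" tests above exercise bison's warning categories and the flags that promote or demote them (-Wall, -Werror=other, -Wno-other, -Wno-error=other). A minimal sketch of what triggers the diagnostic, with an invented file name and grammar (not taken from the testsuite); the bison run is guarded so it is skipped when no bison binary is on PATH:

```shell
# Hypothetical grammar: a '$' in an action that is not part of a valid
# reference such as $$ or $1 is reported as "stray '$'" in the 'other'
# warning category, which -Werror=other promotes to an error.
cat > stray-demo.y <<'EOF'
%%
exp: "num" { val = $ ; } ;
EOF
# Guarded, illustrative invocation (skipped if bison is not installed).
if command -v bison >/dev/null 2>&1; then
  bison -fno-caret -Werror=other stray-demo.y 2>&1 | grep stray || true
fi
```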
./input.at:3019: cp errors-all experr
./input.at:3020: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -ffixit input.y
./input.at:2861: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall input.y --warnings=none -Werror --trace=none
./input.at:2934: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S glr.c -d input.y
./input.at:3022: sed -e '/^fix-it:/d' errors-all >experr
./input.at:3023: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.y
62. input.at:2840: ok
./input.at:2934: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S lalr1.cc -d input.y
65. input.at:3077: testing Unput's effect on locations ...
./input.at:3092: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y
65. input.at:3077: ok
./input.at:3027: rm -f output.c
./input.at:3028: cp input.y input.y.orig
./input.at:3029: sed -e '/fix-it/d' experr
./input.at:3030: echo "bison: file 'input.y' was updated (backup: 'input.y~')" >>experr
./input.at:3031: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --update input.y
./input.at:3034: diff input.y.orig input.y~
./input.at:3037: test ! -f output.c
./input.at:3040: sed -e '1,8d' input.y
./input.at:3062: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.y
66. input.at:3113: testing Non-deprecated directives ...
./input.at:3133: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.y
./input.at:2934: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S glr.cc -d input.y
64. input.at:2946: ok
./input.at:3133: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Werror
67. input.at:3148: testing Cannot type action ...
./input.at:3156: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret input.y
stderr:
input.y:14.1-15.5: error: duplicate directive: '%file-prefix' [-Werror=other]
input.y:13.1-18: note: previous declaration
input.y: error: %expect-rr applies only to GLR parsers [-Werror=other]
input.y: error: fix-its can be applied. Rerun with option '--update'. [-Werror=other]
./input.at:2934: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S glr2.cc -d input.y
./input.at:3156: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Werror
./input.at:3133: sed 's,.*/$,,' stderr 1>&2
./input.at:3133: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=error
./input.at:2934: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S lalr1.d input.y
stderr:
input.y:10.6-13: error: only midrule actions can be typed: int [-Werror=other]
   10 | exp: {}
      | ^~~~~~~~
./input.at:3156: sed 's,.*/$,,' stderr 1>&2
./input.at:3156: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=error
./input.at:3133: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Wnone,none -Werror --trace=none
./input.at:2934: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S lalr1.java input.y
./input.at:3156: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Wnone,none -Werror --trace=none
./input.at:2935: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S yacc.c -d input.y
./input.at:3133: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=none -Werror --trace=none
./input.at:3156: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=none -Werror --trace=none
66. input.at:3113: ok
./input.at:2935: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S glr.c -d input.y
68. input.at:3171: testing Character literals and api.token.raw ...
./input.at:3181: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y
67. input.at:3148: ok
68. input.at:3171: ok
69. input.at:3205: testing %token-table and parse.error ...
./input.at:3220: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y
70. input.at:3231: testing Invalid file prefix mapping arguments ...
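The "Deprecated directives" tests above (input.at:3019-3062) drive bison's fix-it workflow: bison reports "fix-its can be applied. Rerun with option '--update'.", and `bison --update` rewrites the grammar in place, leaving a backup with a `~` suffix. A minimal sketch with an invented file name (`%error-verbose` is a deprecated spelling that --update rewrites); the run is guarded so it is skipped when bison is not installed:

```shell
# Hypothetical grammar using a deprecated directive.
cat > update-demo.y <<'EOF'
%error-verbose
%%
exp: "num" ;
EOF
# --update applies the fix-its and keeps the original as update-demo.y~
# (guarded, illustrative invocation).
if command -v bison >/dev/null 2>&1; then
  { bison -fno-caret --update update-demo.y && test -f update-demo.y~; } || true
fi
```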
./input.at:3246: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -M foo input.y
./input.at:3247: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --file-prefix-map foo input.y
./input.at:2935: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S lalr1.cc -d input.y
./input.at:3248: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -M foo=bar -M baz input.y
./input.at:3249: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -M foo= -M baz input.y
70. input.at:3231: ok
71. named-refs.at:22: testing Tutorial calculator ...
./input.at:3221: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y
./named-refs.at:184: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y
./input.at:2935: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S glr.cc -d input.y
69. input.at:3205: ok
72. named-refs.at:196: testing Undefined and ambiguous references ...
./named-refs.at:254: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o test.c test.y
72. named-refs.at:196: ok
./named-refs.at:184: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS
./input.at:2935: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S glr2.cc -d input.y
73. named-refs.at:297: testing Misleading references ...
./named-refs.at:306: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y
./named-refs.at:306: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o test.c test.y -Werror
stderr:
test.y:11.22-29: error: misleading reference: '$foo.bar' [-Werror=other]
test.y:11.8-10: note: refers to: $foo at $1
test.y:11.12-18: note: possibly meant: $[foo.bar] at $2
./named-refs.at:306: sed 's,.*/$,,' stderr 1>&2
./input.at:2935: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S lalr1.d input.y
./named-refs.at:306: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o test.c test.y --warnings=error
./input.at:2935: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -S lalr1.java input.y
./named-refs.at:306: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o test.c test.y -Wnone,none -Werror --trace=none
./named-refs.at:306: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o test.c test.y --warnings=none -Werror --trace=none
63. input.at:2883: ok
74. named-refs.at:316: testing Many kinds of errors ...
./named-refs.at:384: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o test.c test.y
./named-refs.at:426: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o test.c test.y
74. named-refs.at:316: ok
73. named-refs.at:297: ok
75. named-refs.at:551: testing Missing identifiers in brackets ...
./named-refs.at:559: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o test.c test.y
75. named-refs.at:551: ok
76. named-refs.at:567: testing Redundant words in brackets ...
./named-refs.at:575: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o test.c test.y
76. named-refs.at:567: ok
77. named-refs.at:583: testing Comments in brackets ...
./named-refs.at:591: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o test.c test.y
77. named-refs.at:583: ok
78. named-refs.at:599: testing Stray symbols in brackets ...
./named-refs.at:607: "$PERL" -pi -e 's/\\(\d{3})/chr(oct($1))/ge' test.y || exit 77
./named-refs.at:608: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o test.c test.y
79. named-refs.at:618: testing Redundant words in LHS brackets ...
./named-refs.at:625: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o test.c test.y
78. named-refs.at:599: ok
79. named-refs.at:618: ok
81. named-refs.at:648: testing Unresolved references ...
./named-refs.at:676: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o test.c test.y
80. named-refs.at:635: testing Factored LHS ...
./named-refs.at:642: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y
81. named-refs.at:648: ok
stderr:
stdout:
./named-refs.at:185: $PREPARSER ./test input.txt
stderr:
./named-refs.at:185: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
71. named-refs.at:22: ok
82. named-refs.at:715: testing $ or @ followed by . or - ...
./named-refs.at:725: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret test.y
80. named-refs.at:635: ok
83. output.at:68: testing Output files: -dv ...
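The named-refs.at tests above (71-82) cover bison's named references: in an action, `$name` refers to a symbol by name and `$[name]` quotes names that contain characters like `.`. Test 73 checks that `$foo.bar` is flagged as misleading, since it could mean either `$foo` or the symbol literally named `foo.bar`. A hypothetical grammar (names invented for illustration) showing both forms; the bison run is guarded and skipped when bison is unavailable:

```shell
# Hypothetical grammar: 'foo.bar' is a legal symbol name in bison, so an
# unbracketed '$foo.bar' would be ambiguous; '$[foo.bar]' is explicit.
cat > named-demo.y <<'EOF'
%%
start: foo foo.bar { s = $foo + $[foo.bar]; } ;
foo: "a" ;
foo.bar: "b" ;
EOF
# Guarded, illustrative invocation.
if command -v bison >/dev/null 2>&1; then
  bison -fno-caret -o named-demo.c named-demo.y || true
fi
```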
./named-refs.at:740: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret test.y
./output.at:68: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -dv foo.y
84. output.at:74: testing Output files: -dv >&- ...
./output.at:74: case "$PREBISON" in *valgrind*) exit 77;; esac
./output.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -dv >&- foo.y
stderr:
./output.at:68: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
83. output.at:68: ok
./named-refs.at:740: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret test.y -Werror
stderr:
./output.at:74: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
84. output.at:74: ok
85. output.at:81: testing Output files: -dv -o foo.c ...
./output.at:81: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -dv -o foo.c foo.y
86. output.at:84: testing Output files: -dv -y ...
./output.at:84: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -dv -y foo.y
stderr:
./output.at:81: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
stderr:
test.y:4.9: error: stray '$' [-Werror=other]
test.y:5.9: error: stray '@' [-Werror=other]
./output.at:81: grep '#include "foo.h"' foo.c
stdout:
#include "foo.h"
85. output.at:81: ok
./named-refs.at:740: sed 's,.*/$,,' stderr 1>&2
stderr:
./named-refs.at:740: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret test.y --warnings=error
./output.at:84: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
./output.at:84: grep '#include "y.tab.h"' y.tab.c
stdout:
86. output.at:84: ok
87. output.at:87: testing Output files: api.header.include={"./foo.h"} -dv -y ...
./output.at:87: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -dv -y foo.y
88. output.at:92: testing Output files: -dv -o foo.tab.c ...
./output.at:92: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -dv -o foo.tab.c foo.y
stderr:
foo.y:1.1-7: warning: POSIX Yacc does not support %define [-Wyacc]
./output.at:87: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
./named-refs.at:740: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret test.y -Wnone,none -Werror --trace=none
./output.at:87: grep '#include "./foo.h"' y.tab.c
stdout:
#include "./foo.h"
87. output.at:87: ok
stderr:
./output.at:92: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
89. output.at:95: testing Output files: --fixed-output-files -dv -g --html ...
89. output.at:95: skipped (output.at:95)
88. output.at:92: ok
90. output.at:97: testing Output files: -Hfoo.header -v -gfoo.gv --html=foo.html ...
90. output.at:97: skipped (output.at:97)
./named-refs.at:740: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret test.y --warnings=none -Werror --trace=none
91. output.at:100: testing Output files: -dv -g --xml --fixed-output-files ...
./output.at:100: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -dv -g --xml --fixed-output-files foo.y
92. output.at:102: testing Output files: -dv -g --xml -y ...
./output.at:102: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -dv -g --xml -y foo.y
stderr:
./output.at:102: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
82. named-refs.at:715: ok
92. output.at:102: ok
stderr:
:6: warning: deprecated option: '--fixed-output-files', use '-o y.tab.c' [-Wdeprecated]
./output.at:100: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
91. output.at:100: ok
93. output.at:104: testing Output files: %require "3.4" -dv -g --xml -y ...
./output.at:104: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -dv -g --xml -y foo.y
94. output.at:107: testing Output files: -dv -g --xml -o y.tab.c ...
./output.at:107: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -dv -g --xml -o y.tab.c foo.y
95. output.at:110: testing Output files: -dv -b bar ...
./output.at:110: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -dv -b bar foo.y
stderr:
foo.y:1.1-8: warning: POSIX Yacc does not support %require [-Wyacc]
foo.y:1.10-14: warning: POSIX Yacc does not support string literals [-Wyacc]
./output.at:104: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
stderr:
./output.at:107: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
93. output.at:104: ok
94. output.at:107: ok
stderr:
96. output.at:112: testing Output files: -dv -g -o foo.c ...
./output.at:110: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
./output.at:112: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -dv -g -o foo.c foo.y
97. output.at:116: testing Output files: %header %verbose ...
./output.at:116: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.y
95. output.at:110: ok
98. output.at:118: testing Output files: %header %verbose %yacc ...
./output.at:118: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.y
stderr:
stderr:
./output.at:112: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
./output.at:116: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
stderr:
./output.at:118: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
97. output.at:116: ok
96. output.at:112: ok
98. output.at:118: ok
100. output.at:125: testing Output files: %file-prefix "bar" %header %verbose ...
./output.at:125: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.y
99. output.at:121: testing Output files: %header %verbose %yacc ...
./output.at:121: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.yy
101. output.at:127: testing Output files: %output "bar.c" %header %verbose %yacc ...
./output.at:127: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.y
stderr:
./output.at:125: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
100. output.at:125: ok
stderr:
./output.at:121: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
99. output.at:121: ok
stderr:
./output.at:127: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
102. output.at:129: testing Output files: %file-prefix "baz" %output "bar.c" %header %verbose %yacc ...
./output.at:129: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.y
101. output.at:127: ok
103. output.at:136: testing Output files: %header %verbose ...
./output.at:136: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.yy
104. output.at:139: testing Output files: %header %verbose -o foo.c ...
./output.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o foo.c foo.yy
stderr:
./output.at:129: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
102. output.at:129: ok
stderr:
stderr:
./output.at:139: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
./output.at:136: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
105. output.at:142: testing Output files: --header=foo.hpp -o foo.c++ ...
./output.at:142: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --header=foo.hpp -o foo.c++ foo.yy
104. output.at:139: ok
103. output.at:136: ok
106. output.at:146: testing Output files: --header=foo.hpp -o foo.c++ ...
./output.at:146: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --header=foo.hpp -o foo.c++ foo.yy
107. output.at:150: testing Output files: %header "foo.hpp" -o foo.c++ ...
./output.at:150: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o foo.c++ foo.yy
stderr:
./output.at:142: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
105. output.at:142: ok
stderr:
./output.at:146: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
106. output.at:146: ok
stderr:
./output.at:150: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
108. output.at:154: testing Output files: -o foo.c++ --graph=foo.gph ...
./output.at:154: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o foo.c++ --graph=foo.gph foo.yy
107. output.at:150: ok
109. output.at:160: testing Output files: %type useless --header --graph --xml --report=all -Wall -Werror ...
./output.at:160: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --header --graph --xml --report=all -Wall -Werror foo.y
110. output.at:167: testing Output files: useless=--header --graph --xml --report=all -Wall -Werror ...
./output.at:167: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --header --graph --xml --report=all -Wall -Werror foo.y
stderr:
foo.y:1.13-19: error: symbol 'useless' is used, but is not defined as a token and has no rules [-Werror=other]
foo.y: error: 1 nonterminal useless in grammar [-Werror=other]
foo.y:1.13-19: error: nonterminal useless in grammar: useless [-Werror=other]
./output.at:160: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
109. output.at:160: ok
111. output.at:173: testing Output files: %defines -o foo.c++ ...
stderr:
./output.at:154: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
./output.at:173: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o foo.c++ foo.yy
108. output.at:154: ok
112. output.at:176: testing Output files: %defines "foo.hpp" -o foo.c++ ...
stderr:
./output.at:173: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
./output.at:176: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o foo.c++ foo.yy
111. output.at:173: ok
stderr:
foo.y:1.1-15: error: %define variable 'useless' is not used
./output.at:167: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.y|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
110. output.at:167: ok
stderr:
113. output.at:191: testing Output files: lalr1.cc ...
./output.at:176: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
./output.at:191: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.yy
114. output.at:194: testing Output files: lalr1.cc %verbose ...
112. output.at:176: ok
./output.at:194: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.yy
115. output.at:197: testing Output files: lalr1.cc %header %verbose ...
stderr:
./output.at:197: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.yy
./output.at:194: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
stderr:
114. output.at:194: ok
./output.at:191: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
113. output.at:191: ok
116. output.at:200: testing Output files: lalr1.cc %verbose %locations ...
./output.at:200: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.yy
117. output.at:203: testing Output files: lalr1.cc %header %verbose %locations ...
stderr:
./output.at:203: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.yy
./output.at:197: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
115. output.at:197: ok
stderr:
./output.at:200: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
118. output.at:206: testing Output files: lalr1.cc %header %verbose ...
./output.at:206: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret subdir/foo.yy
116. output.at:200: ok
stderr:
./output.at:203: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
117. output.at:203: ok
119. output.at:210: testing Output files: lalr1.cc %header %verbose %locations -o subdir/foo.cc ...
120. output.at:215: testing Output files: lalr1.cc %header %verbose %file-prefix "output_dir/foo" ...
stderr:
./output.at:206: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(subdir/foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
./output.at:206: grep 'include .subdir/' foo.tab.cc
./output.at:206: grep 'include .subdir/' foo.tab.hh
118. output.at:206: ok
./output.at:210: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o subdir/foo.cc subdir/foo.yy
./output.at:215: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret gram_dir/foo.yy
121. output.at:220: testing Output files: lalr1.cc %header %locations %verbose %file-prefix "output_dir/foo" ...
stderr:
./output.at:215: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(gram_dir/foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
120. output.at:215: ok
stderr:
./output.at:220: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret gram_dir/foo.yy
./output.at:210: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(subdir/foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
122. output.at:226: testing Output files: lalr1.cc %header %locations api.location.file=none %require "3.2" ...
./output.at:210: grep 'include .subdir/' subdir/foo.cc
./output.at:226: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.yy
./output.at:210: grep 'include .subdir/' subdir/foo.hh
119. output.at:210: ok
stderr:
./output.at:220: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(gram_dir/foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77
121. output.at:220: ok
123. output.at:231: testing Output files: lalr1.cc %header %locations api.location.file="foo.loc.hh" %require "3.2" ...
./output.at:231: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.yy
124. output.at:237: testing Output files: lalr1.cc %header %locations api.location.file="$at_dir/foo.loc.hh" %require "3.2" ...
./output.at:237: "$PERL" -pi -e 's{\$at_dir}'"{$at_group_dir}g" foo.yy || exit 77 ./output.at:237: rm -f foo.yy.bak ./output.at:237: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret foo.yy stderr: ./output.at:231: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77 123. output.at:231: ok stderr: ./output.at:226: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77 122. output.at:226: ok 125. output.at:267: testing Conflicting output files: --graph="foo.tab.c" ... ./output.at:267: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --graph="foo.tab.c" foo.y 126. output.at:272: testing Conflicting output files: %header "foo.output" -v ... ./output.at:272: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -v foo.y stderr: ./output.at:237: find . -type f | "$PERL" -ne ' s,\./,,; chomp; push @file, $_ unless m{^(foo.yy|testsuite.log)$}; END { print join (" ", sort @file), "\n" }' || exit 77 124. output.at:237: ok 127. output.at:277: testing Conflicting output files: lalr1.cc %header %locations --graph="location.hh" ... 
./output.at:277: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --graph="location.hh" foo.y ./output.at:267: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --graph="foo.tab.c" foo.y -Werror ./output.at:272: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -v foo.y -Werror stderr: foo.y: error: conflicting outputs to file 'foo.tab.c' [-Werror=other] ./output.at:267: sed 's,.*/$,,' stderr 1>&2 ./output.at:277: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --graph="location.hh" foo.y -Werror stderr: foo.y: error: conflicting outputs to file 'foo.output' [-Werror=other] ./output.at:267: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --graph="foo.tab.c" foo.y --warnings=error ./output.at:272: sed 's,.*/$,,' stderr 1>&2 ./output.at:272: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -v foo.y --warnings=error stderr: foo.y: error: conflicting outputs to file 'location.hh' [-Werror=other] ./output.at:277: sed 's,.*/$,,' stderr 1>&2 ./output.at:277: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret 
--graph="location.hh" foo.y --warnings=error ./output.at:267: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --graph="foo.tab.c" foo.y -Wnone,none -Werror --trace=none ./output.at:272: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -v foo.y -Wnone,none -Werror --trace=none ./output.at:272: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -v foo.y --warnings=none -Werror --trace=none ./output.at:267: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --graph="foo.tab.c" foo.y --warnings=none -Werror --trace=none ./output.at:277: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --graph="location.hh" foo.y -Wnone,none -Werror --trace=none ./output.at:267: cat foo.y ./output.at:272: cat foo.y 125. output.at:267: ok 126. output.at:272: ok ./output.at:277: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --graph="location.hh" foo.y --warnings=none -Werror --trace=none 128. output.at:282: testing Conflicting output files: -o foo.y ... 
./output.at:282: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o foo.y foo.y 129. output.at:328: testing Output file name: `~!@#$%^&*()-=_+{}[]|\:;<>, .' ... ./output.at:328: touch "\`~!@#\$%^&*()-=_+{}[]|\\:;<>, .'.tmp" || exit 77 ./output.at:328: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "\`~!@#\$%^&*()-=_+{}[]|\\:;<>, .'.c" --header="\`~!@#\$%^&*()-=_+{}[]|\\:;<>, .'.h" glr.y ./output.at:282: cat foo.y 128. output.at:282: ok ./output.at:277: cat foo.y 127. output.at:277: ok 130. output.at:335: testing Output file name: ( ... ./output.at:335: touch "(.tmp" || exit 77 ./output.at:335: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "(.c" --header="(.h" glr.y 131. output.at:336: testing Output file name: ) ... 
./output.at:336: touch ").tmp" || exit 77 ./output.at:336: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o ").c" --header=").h" glr.y ./output.at:328: ls "\`~!@#\$%^&*()-=_+{}[]|\\:;<>, .'.c" "\`~!@#\$%^&*()-=_+{}[]|\\:;<>, .'.h" stdout: `~!@#$%^&*()-=_+{}[]|\:;<>, .'.c `~!@#$%^&*()-=_+{}[]|\:;<>, .'.h ./output.at:328: $CC $CFLAGS $CPPFLAGS -c -o glr.o -c "\`~!@#\$%^&*()-=_+{}[]|\\:;<>, .'.c" ./output.at:335: ls "(.c" "(.h" stdout: (.c (.h ./output.at:335: $CC $CFLAGS $CPPFLAGS -c -o glr.o -c "(.c" ./output.at:336: ls ").c" ").h" stdout: ).c ).h ./output.at:336: $CC $CFLAGS $CPPFLAGS -c -o glr.o -c ").c" stderr: stdout: ./output.at:335: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "(.cc" --header="(.hh" cxx.y stderr: stdout: ./output.at:328: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "\`~!@#\$%^&*()-=_+{}[]|\\:;<>, .'.cc" --header="\`~!@#\$%^&*()-=_+{}[]|\\:;<>, .'.hh" cxx.y ./output.at:335: ls "(.cc" "(.hh" stdout: (.cc (.hh ./output.at:335: $CXX $CPPFLAGS $CXXFLAGS -c -o cxx.o -c "(.cc" ./output.at:328: ls "\`~!@#\$%^&*()-=_+{}[]|\\:;<>, .'.cc" "\`~!@#\$%^&*()-=_+{}[]|\\:;<>, .'.hh" stdout: `~!@#$%^&*()-=_+{}[]|\:;<>, .'.cc `~!@#$%^&*()-=_+{}[]|\:;<>, .'.hh ./output.at:328: $CXX $CPPFLAGS $CXXFLAGS -c -o cxx.o -c "\`~!@#\$%^&*()-=_+{}[]|\\:;<>, .'.cc" stderr: stdout: ./output.at:336: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o ").cc" --header=").hh" cxx.y ./output.at:336: ls ").cc" ").hh" stdout: ).cc ).hh ./output.at:336: $CXX $CPPFLAGS $CXXFLAGS -c -o cxx.o -c ").cc" stderr: stdout: 130. output.at:335: ok 132. output.at:337: testing Output file name: # ... 
./output.at:337: touch "#.tmp" || exit 77 ./output.at:337: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "#.c" --header="#.h" glr.y ./output.at:337: ls "#.c" "#.h" stdout: #.c #.h ./output.at:337: $CC $CFLAGS $CPPFLAGS -c -o glr.o -c "#.c" stderr: stdout: 129. output.at:328: ok stderr: 133. output.at:338: testing Output file name: @@ ... ./output.at:338: touch "@@.tmp" || exit 77 stdout: 131. output.at:336: ok ./output.at:338: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "@@.c" --header="@@.h" glr.y 134. output.at:339: testing Output file name: @{ ... ./output.at:339: touch "@{.tmp" || exit 77 ./output.at:339: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "@{.c" --header="@{.h" glr.y ./output.at:338: ls "@@.c" "@@.h" stdout: @@.c @@.h ./output.at:338: $CC $CFLAGS $CPPFLAGS -c -o glr.o -c "@@.c" ./output.at:339: ls "@{.c" "@{.h" stdout: @{.c @{.h ./output.at:339: $CC $CFLAGS $CPPFLAGS -c -o glr.o -c "@{.c" stderr: stdout: ./output.at:337: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "#.cc" --header="#.hh" cxx.y stderr: stdout: ./output.at:339: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "@{.cc" --header="@{.hh" cxx.y ./output.at:337: ls "#.cc" "#.hh" stdout: #.cc #.hh ./output.at:337: $CXX $CPPFLAGS $CXXFLAGS -c -o cxx.o -c "#.cc" ./output.at:339: ls "@{.cc" "@{.hh" stdout: @{.cc @{.hh ./output.at:339: $CXX $CPPFLAGS $CXXFLAGS -c -o cxx.o -c "@{.cc" stderr: stdout: ./output.at:338: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "@@.cc" --header="@@.hh" cxx.y ./output.at:338: ls "@@.cc" "@@.hh" stdout: @@.cc @@.hh ./output.at:338: $CXX $CPPFLAGS $CXXFLAGS -c -o cxx.o -c "@@.cc" stderr: 
stdout: 132. output.at:337: ok 135. output.at:340: testing Output file name: @} ... ./output.at:340: touch "@}.tmp" || exit 77 ./output.at:340: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "@}.c" --header="@}.h" glr.y stderr: stdout: 134. output.at:339: ok ./output.at:340: ls "@}.c" "@}.h" stdout: @}.c @}.h ./output.at:340: $CC $CFLAGS $CPPFLAGS -c -o glr.o -c "@}.c" 136. output.at:341: testing Output file name: [ ... ./output.at:341: touch "[.tmp" || exit 77 ./output.at:341: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "[.c" --header="[.h" glr.y ./output.at:341: ls "[.c" "[.h" stdout: [.c [.h ./output.at:341: $CC $CFLAGS $CPPFLAGS -c -o glr.o -c "[.c" stderr: stdout: 133. output.at:338: ok 137. output.at:342: testing Output file name: ] ... ./output.at:342: touch "].tmp" || exit 77 ./output.at:342: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "].c" --header="].h" glr.y ./output.at:342: ls "].c" "].h" stdout: ].c ].h ./output.at:342: $CC $CFLAGS $CPPFLAGS -c -o glr.o -c "].c" stderr: stdout: ./output.at:340: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "@}.cc" --header="@}.hh" cxx.y ./output.at:340: ls "@}.cc" "@}.hh" stdout: @}.cc @}.hh ./output.at:340: $CXX $CPPFLAGS $CXXFLAGS -c -o cxx.o -c "@}.cc" stderr: stdout: ./output.at:341: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "[.cc" --header="[.hh" cxx.y ./output.at:341: ls "[.cc" "[.hh" stdout: [.cc [.hh ./output.at:341: $CXX $CPPFLAGS $CXXFLAGS -c -o cxx.o -c "[.cc" stderr: stdout: ./output.at:342: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o "].cc" --header="].hh" cxx.y ./output.at:342: ls "].cc" "].hh" stdout: ].cc ].hh 
./output.at:342: $CXX $CPPFLAGS $CXXFLAGS -c -o cxx.o -c "].cc" stderr: stdout: 135. output.at:340: ok 138. output.at:363: testing Graph with no conflicts ... ./output.at:363: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall --graph input.y stderr: stdout: 136. output.at:341: ok stderr: ./output.at:363: grep -v // input.gv 138. output.at:363: ok 139. output.at:403: testing Graph with unsolved S/R ... ./output.at:403: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall --graph input.y 140. output.at:473: testing Graph with solved S/R ... ./output.at:473: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall --graph input.y stderr: input.y: warning: 3 shift/reduce conflicts [-Wconflicts-sr] input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples input.y:10.10-18: warning: rule useless in parser due to conflicts [-Wother] input.y:11.10-18: warning: rule useless in parser due to conflicts [-Wother] input.y:12.10-18: warning: rule useless in parser due to conflicts [-Wother] ./output.at:403: grep -v // input.gv 139. output.at:403: ok stderr: input.y:6.5-7: warning: rule useless in parser due to conflicts [-Wother] input.y:14.10-18: warning: rule useless in parser due to conflicts [-Wother] input.y:15.10-18: warning: rule useless in parser due to conflicts [-Wother] ./output.at:473: grep -v // input.gv 140. output.at:473: ok 141. output.at:538: testing Graph with R/R ... 
./output.at:538: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall --graph input.y stderr: input.y: warning: 1 reduce/reduce conflict [-Wconflicts-rr] input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples input.y:4.3: warning: rule useless in parser due to conflicts [-Wother] ./output.at:538: grep -v // input.gv 141. output.at:538: ok 142. output.at:576: testing Graph with reductions with multiple LAT ... ./output.at:576: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall --graph input.y 143. output.at:641: testing Graph with a reduction rule both enabled and disabled ... ./output.at:641: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall --graph input.y stderr: input.y: warning: 3 reduce/reduce conflicts [-Wconflicts-rr] input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples input.y:2.14-18: warning: rule useless in parser due to conflicts [-Wother] input.y:5.3: warning: rule useless in parser due to conflicts [-Wother] ./output.at:576: grep -v // input.gv 142. output.at:576: ok 144. output.at:744: testing C++ Output File Prefix Mapping ... ./output.at:775: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -o out/x1.cc -M out/=bar/ x1.yy ./output.at:775: $CXX $CPPFLAGS $CXXFLAGS -Iout/include -c -o out/x1.o out/x1.cc stderr: stdout: 137. output.at:342: ok 145. diagnostics.at:84: testing Warnings ... stderr: input.y: warning: 4 shift/reduce conflicts [-Wconflicts-sr] input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples ./output.at:641: grep -v // input.gv 143. output.at:641: ok 146. diagnostics.at:133: testing Single point locations ... 
./diagnostics.at:84: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret --color=debug -Wall,cex input.y ./diagnostics.at:133: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret --color=debug -Wall,cex input.y ./diagnostics.at:84: "$PERL" -pi -e ' s{()}{ $1 eq "" ? $1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:84: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret -Wall,cex input.y ./diagnostics.at:133: "$PERL" -pi -e ' s{()}{ $1 eq "" ? $1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:133: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret -Wall,cex input.y 145. diagnostics.at:84: ok 146. diagnostics.at:133: ok 147. diagnostics.at:182: testing Line is too short, and then you die ... ./diagnostics.at:182: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret --color=debug -Wall,cex input.y ./diagnostics.at:182: "$PERL" -pi -e ' s{()}{ $1 eq "" ? $1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 148. diagnostics.at:217: testing Zero-width characters ... 
./diagnostics.at:182: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret -Wall,cex input.y 147. diagnostics.at:182: ok ./diagnostics.at:217: "$PERL" -pi -e 's{\^M}{\r}g;s{\\(\d{3}|.)}{$v = $1; $v =~ /\A\d+\z/ ? chr($v) : $v}ge' input.y experr || exit 77 ./diagnostics.at:217: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret --color=debug -Wall,cex input.y ./diagnostics.at:217: "$PERL" -pi -e ' s{()}{ $1 eq "" ? $1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:217: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret -Wall,cex input.y 149. diagnostics.at:235: testing Tabulations and multibyte characters ... 148. diagnostics.at:217: ok ./diagnostics.at:235: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret --color=debug -Wall,cex input.y 150. diagnostics.at:282: testing Tabulations and multibyte characters ... ./diagnostics.at:282: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret --color=debug -Wall,cex input.y ./diagnostics.at:235: "$PERL" -pi -e ' s{()}{ $1 eq "" ? 
$1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:235: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret -Wall,cex input.y ./diagnostics.at:282: "$PERL" -pi -e ' s{()}{ $1 eq "" ? $1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:282: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret -Wall,cex input.y 149. diagnostics.at:235: ok 150. diagnostics.at:282: ok 151. diagnostics.at:303: testing Special files ... 152. diagnostics.at:328: testing Complaints from M4 ... ./diagnostics.at:303: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret --color=debug -Wall,cex input.y ./diagnostics.at:328: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret --color=debug -Wall,cex input.y ./diagnostics.at:303: "$PERL" -pi -e ' s{()}{ $1 eq "" ? $1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:303: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret -Wall,cex input.y ./diagnostics.at:328: "$PERL" -pi -e ' s{()}{ $1 eq "" ? 
$1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:328: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret -Wall,cex input.y 151. diagnostics.at:303: ok 152. diagnostics.at:328: ok 153. diagnostics.at:351: testing Carriage return ... ./diagnostics.at:351: "$PERL" -pi -e 's{\^M}{\r}g;s{\\(\d{3}|.)}{$v = $1; $v =~ /\A\d+\z/ ? chr($v) : $v}ge' input.y experr || exit 77 ./diagnostics.at:351: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret --color=debug -Wall,cex input.y 154. diagnostics.at:372: testing CR NL ... ./diagnostics.at:351: "$PERL" -pi -e ' s{()}{ $1 eq "" ? $1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:351: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret -Wall,cex input.y ./diagnostics.at:372: "$PERL" -pi -e 's{\^M}{\r}g;s{\\(\d{3}|.)}{$v = $1; $v =~ /\A\d+\z/ ? chr($v) : $v}ge' input.y experr || exit 77 ./diagnostics.at:372: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret --color=debug -Wall,cex input.y 153. diagnostics.at:351: ok 155. diagnostics.at:399: testing Screen width: 200 columns ... 
./diagnostics.at:399: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" COLUMNS=200 bison -fcaret --color=debug -Wall,cex input.y ./diagnostics.at:372: "$PERL" -pi -e ' s{()}{ $1 eq "" ? $1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:372: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret -Wall,cex input.y ./diagnostics.at:399: "$PERL" -pi -e ' s{()}{ $1 eq "" ? $1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:399: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" COLUMNS=200 bison -fcaret -Wall,cex input.y 154. diagnostics.at:372: ok 156. diagnostics.at:432: testing Screen width: 80 columns ... ./diagnostics.at:432: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" COLUMNS=80 bison -fcaret --color=debug -Wall,cex input.y 155. diagnostics.at:399: ok 157. diagnostics.at:465: testing Screen width: 60 columns ... ./diagnostics.at:465: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" COLUMNS=60 bison -fcaret --color=debug -Wall,cex input.y ./diagnostics.at:432: "$PERL" -pi -e ' s{()}{ $1 eq "" ? 
$1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:432: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" COLUMNS=80 bison -fcaret -Wall,cex input.y ./diagnostics.at:465: "$PERL" -pi -e ' s{()}{ $1 eq "" ? $1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:465: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" COLUMNS=60 bison -fcaret -Wall,cex input.y 156. diagnostics.at:432: ok 157. diagnostics.at:465: ok 158. diagnostics.at:504: testing Suggestions ... 159. diagnostics.at:527: testing Counterexamples ... ./diagnostics.at:504: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret --color=debug -Wall,cex input.y ./diagnostics.at:504: "$PERL" -pi -e ' s{()}{ $1 eq "" ? $1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:527: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret --color=debug -Wall,cex input.y ./diagnostics.at:504: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret -Wall,cex input.y 158. diagnostics.at:504: ok 160. diagnostics.at:645: testing Deep Counterexamples ... 
./diagnostics.at:645: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret --color=debug -Wall,cex input.y ./diagnostics.at:527: "$PERL" -pi -e ' s{()}{ $1 eq "" ? $1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:527: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret -Wall,cex input.y ./diagnostics.at:645: "$PERL" -pi -e ' s{()}{ $1 eq "" ? $1 : "" }ge; if (/Example/) { ++$example; $_ = "" if $example % 2 == 0; } ' experr || exit 77 ./diagnostics.at:645: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; LC_ALL="$locale" bison -fcaret -Wall,cex input.y 160. diagnostics.at:645: ok 159. diagnostics.at:527: ok 161. diagnostics.at:713: testing Indentation with message suppression ... ./diagnostics.at:725: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -Wno-other input.y 162. skeletons.at:25: testing Relative skeleton file names ... 
./skeletons.at:27: mkdir tmp ./skeletons.at:63: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret tmp/input-gram.y ./skeletons.at:64: cat input-gram.tab.c ./diagnostics.at:725: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wno-other input.y -Werror ./skeletons.at:68: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input-gram.y ./skeletons.at:69: cat input-gram.tab.c ./skeletons.at:73: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --skeleton=tmp/skel.c tmp/input-cmd-line.y ./skeletons.at:74: cat input-cmd-line.tab.c 162. skeletons.at:25: ok stderr: input.y:2.1-12: error: deprecated directive: '%pure-parser', use '%define api.pure' [-Werror=deprecated] 2 | %pure-parser | ^~~~~~~~~~~~ | %define api.pure input.y:3.1-14: error: deprecated directive: '%error-verbose', use '%define parse.error verbose' [-Werror=deprecated] 3 | %error-verbose | ^~~~~~~~~~~~~~ | %define parse.error verbose ./diagnostics.at:725: sed 's,.*/$,,' stderr 1>&2 ./diagnostics.at:725: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wno-other input.y --warnings=error 163. skeletons.at:85: testing Installed skeleton file names ... 
./skeletons.at:120: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --skeleton=yacc.c -o input-cmd-line.c input-cmd-line.y ./skeletons.at:121: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input-cmd-line input-cmd-line.c $LIBS ./diagnostics.at:725: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wno-other input.y -Wnone,none -Werror --trace=none ./diagnostics.at:725: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wno-other input.y --warnings=none -Werror --trace=none 161. diagnostics.at:713: ok 164. skeletons.at:142: testing Boolean=variables: invalid skeleton defaults ... ./skeletons.at:155: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y 164. skeletons.at:142: ok 165. skeletons.at:166: testing Complaining during macro argument expansion ... 
./skeletons.at:189: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input1.y ./skeletons.at:209: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input2.y ./skeletons.at:223: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input3.y ./skeletons.at:237: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input4.y 165. skeletons.at:166: ok 166. skeletons.at:248: testing Fatal errors make M4 exit immediately ... 
./skeletons.at:262: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input1.y stderr: stdout: ./output.at:782: sed -ne 's/#line [0-9][0-9]* "/#line "/p;/INCLUDED/p;/\\file/{p;n;p;}' out/include/ast/loc.hh ./output.at:794: sed -ne 's/^#line [0-9][0-9]* "/#line "/p;/INCLUDED/p;/\\file/{p;n;p;}' out/x1.hh ./output.at:806: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -o out/x2.cc -M out/=bar/ x2.yy ./skeletons.at:279: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input2.y stderr: stdout: ./skeletons.at:122: $PREPARSER ./input-cmd-line stderr: syntax error, unexpected 'a', expecting end of file ./skeletons.at:122: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 166. skeletons.at:248: ok ./skeletons.at:126: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input-gram.c input-gram.y ./output.at:806: $CXX $CPPFLAGS $CXXFLAGS -Iout/include -c -o out/x2.o out/x2.cc 167. skeletons.at:302: testing Fatal errors but M4 continues producing output ... ./skeletons.at:314: "$PERL" gen-skel.pl > skel.c || exit 77 ./skeletons.at:322: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y ./skeletons.at:127: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input-gram input-gram.c $LIBS 167. skeletons.at:302: ok 168. sets.at:27: testing Nullable ... 
./sets.at:42: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --trace=sets input.y stderr: bison (GNU Bison) 3.8.2 RITEM 0: e $end (rule 0) 3: 'e' (rule 1) 5: (rule 2) DERIVES $accept derives 0 e $end e derives 1 'e' 2 %empty NULLABLE $accept: no e: yes RTC: Firsts Input BEGIN 01 .--. 0| 1| 1| | `--' RTC: Firsts Input END RTC: Firsts Output BEGIN 01 .--. 0|11| 1| 1| `--' RTC: Firsts Output END FIRSTS $accept firsts $accept e e firsts e FDERIVES $accept derives 0 e $end 1 'e' 2 %empty e derives 1 'e' 2 %empty relation_transpose: relation_transpose: output: follows after includes: FOLLOWS[goto[0] = (0, e, 2)] = $end Lookaheads: State 0: rule 2: $end State 1: rule 1: State 3: rule 0: ./sets.at:43: sed -f extract.sed stderr 168. sets.at:27: ok 169. sets.at:111: testing Broken Closure ... ./sets.at:125: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --trace=sets input.y stderr: bison (GNU Bison) 3.8.2 RITEM 0: a $end (rule 0) 3: b (rule 1) 5: c (rule 2) 7: d (rule 3) 9: e (rule 4) 11: f (rule 5) 13: g (rule 6) 15: h (rule 7) 17: 'h' (rule 8) DERIVES $accept derives 0 a $end a derives 1 b b derives 2 c c derives 3 d d derives 4 e e derives 5 f f derives 6 g g derives 7 h h derives 8 'h' NULLABLE $accept: no a: no b: no c: no d: no e: no f: no g: no h: no RTC: Firsts Input BEGIN 012345678 .---------. 0| 1 | 1| 1 | 2| 1 | 3| 1 | 4| 1 | 5| 1 | 6| 1 | 7| 1| 8| | `---------' RTC: Firsts Input END RTC: Firsts Output BEGIN 012345678 .---------. 
0|111111111| 1| 11111111| 2| 1111111| 3| 111111| 4| 11111| 5| 1111| 6| 111| 7| 11| 8| 1| `---------' RTC: Firsts Output END FIRSTS $accept firsts $accept a b c d e f g h a firsts a b c d e f g h b firsts b c d e f g h c firsts c d e f g h d firsts d e f g h e firsts e f g h f firsts f g h g firsts g h h firsts h FDERIVES $accept derives 0 a $end 1 b 2 c 3 d 4 e 5 f 6 g 7 h 8 'h' a derives 1 b 2 c 3 d 4 e 5 f 6 g 7 h 8 'h' b derives 2 c 3 d 4 e 5 f 6 g 7 h 8 'h' c derives 3 d 4 e 5 f 6 g 7 h 8 'h' d derives 4 e 5 f 6 g 7 h 8 'h' e derives 5 f 6 g 7 h 8 'h' f derives 6 g 7 h 8 'h' g derives 7 h 8 'h' h derives 8 'h' relation_transpose: 0: 1 1: 2 2: 3 3: 4 4: 5 5: 6 6: 7 relation_transpose: output: 1: 0 2: 1 3: 2 4: 3 5: 4 6: 5 7: 6 follows after includes: FOLLOWS[goto[0] = (0, a, 2)] = $end FOLLOWS[goto[1] = (0, b, 3)] = $end FOLLOWS[goto[2] = (0, c, 4)] = $end FOLLOWS[goto[3] = (0, d, 5)] = $end FOLLOWS[goto[4] = (0, e, 6)] = $end FOLLOWS[goto[5] = (0, f, 7)] = $end FOLLOWS[goto[6] = (0, g, 8)] = $end FOLLOWS[goto[7] = (0, h, 9)] = $end Lookaheads: State 1: rule 8: State 3: rule 1: State 4: rule 2: State 5: rule 3: State 6: rule 4: State 7: rule 5: State 8: rule 6: State 9: rule 7: State 10: rule 0: ./sets.at:127: sed -n 's/[ ]*$//;/^RTC: Firsts Output BEGIN/,/^RTC: Firsts Output END/p' stderr 169. sets.at:111: ok 170. sets.at:153: testing Firsts ... ./sets.at:171: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --trace=sets input.y stderr: bison (GNU Bison) 3.8.2 RITEM 0: exp $end (rule 0) 3: exp '<' exp (rule 1) 7: exp '>' exp (rule 2) 11: exp '+' exp (rule 3) 15: exp '-' exp (rule 4) 19: exp '^' exp (rule 5) 23: exp '=' exp (rule 6) 27: "exp" (rule 7) DERIVES $accept derives 0 exp $end exp derives 1 exp '<' exp 2 exp '>' exp 3 exp '+' exp 4 exp '-' exp 5 exp '^' exp 6 exp '=' exp 7 "exp" NULLABLE $accept: no exp: no RTC: Firsts Input BEGIN 01 .--. 
0| 1| 1| 1| `--' RTC: Firsts Input END RTC: Firsts Output BEGIN 01 .--. 0|11| 1| 1| `--' RTC: Firsts Output END FIRSTS $accept firsts $accept exp exp firsts exp FDERIVES $accept derives 0 exp $end 1 exp '<' exp 2 exp '>' exp 3 exp '+' exp 4 exp '-' exp 5 exp '^' exp 6 exp '=' exp 7 "exp" exp derives 1 exp '<' exp 2 exp '>' exp 3 exp '+' exp 4 exp '-' exp 5 exp '^' exp 6 exp '=' exp 7 "exp" relation_transpose: 0: 1 2 3 4 5 6 1: 1 2 3 4 5 6 2: 1 2 3 4 5 6 3: 1 2 3 4 5 6 4: 1 2 3 4 5 6 5: 1 2 3 4 5 6 6: 1 2 3 4 5 6 relation_transpose: output: 1: 0 1 2 3 4 5 6 2: 0 1 2 3 4 5 6 3: 0 1 2 3 4 5 6 4: 0 1 2 3 4 5 6 5: 0 1 2 3 4 5 6 6: 0 1 2 3 4 5 6 follows after includes: FOLLOWS[goto[0] = (0, exp, 2)] = $end '<' '>' '+' '-' '^' '=' FOLLOWS[goto[1] = (4, exp, 10)] = $end '<' '>' '+' '-' '^' '=' FOLLOWS[goto[2] = (5, exp, 11)] = $end '<' '>' '+' '-' '^' '=' FOLLOWS[goto[3] = (6, exp, 12)] = $end '<' '>' '+' '-' '^' '=' FOLLOWS[goto[4] = (7, exp, 13)] = $end '<' '>' '+' '-' '^' '=' FOLLOWS[goto[5] = (8, exp, 14)] = $end '<' '>' '+' '-' '^' '=' FOLLOWS[goto[6] = (9, exp, 15)] = $end '<' '>' '+' '-' '^' '=' Lookaheads: State 1: rule 7: State 3: rule 0: State 10: rule 1: $end '<' '>' '+' '-' '^' '=' State 11: rule 2: $end '<' '>' '+' '-' '^' '=' State 12: rule 3: $end '<' '>' '+' '-' '^' '=' State 13: rule 4: $end '<' '>' '+' '-' '^' '=' State 14: rule 5: $end '<' '>' '+' '-' '^' '=' State 15: rule 6: $end '<' '>' '+' '-' '^' '=' ./sets.at:172: sed -f extract.sed stderr 170. sets.at:153: ok 171. sets.at:228: testing Accept ... ./sets.at:240: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -v -o input.c input.y stderr: stdout: ./skeletons.at:128: $PREPARSER ./input-gram stderr: syntax error, unexpected 'a', expecting end of file ./skeletons.at:128: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 163. 
skeletons.at:85: ok ./sets.at:243: sed -n 's/.*define YYFINAL *\([0-9][0-9]*\)/final state \1/p' input.c stdout: final state 6 ./sets.at:248: sed -n ' /^State \(.*\)/{ s//final state \1/ x } / accept/{ x p q } ' input.output 172. sets.at:269: testing Build relations ... ./sets.at:286: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret input.y 171. sets.at:228: ok 173. sets.at:315: testing Reduced Grammar ... ./sets.at:325: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --trace=grammar -o input.c input.y ./sets.at:286: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Werror ./sets.at:325: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --trace=grammar -o input.c input.y -Werror stderr: input.y: error: 5 reduce/reduce conflicts [-Werror=conflicts-rr] input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples input.y:2.14-17: error: rule useless in parser due to conflicts [-Werror=other] 2 | expr: term | term | term | term | term | term | ^~~~ input.y:2.21-24: error: rule useless in parser due to conflicts [-Werror=other] 2 | expr: term | term | term | term | term | term | ^~~~ input.y:2.28-31: error: rule useless in parser due to conflicts [-Werror=other] 2 | expr: term | term | term | term | term | term | ^~~~ input.y:2.35-38: error: rule useless in parser due to conflicts [-Werror=other] 2 | expr: term | term | term | term | term | term | ^~~~ input.y:2.42-45: error: rule useless in parser due to conflicts [-Werror=other] 2 | expr: term | term | term | term | term | term | ^~~~ ./sets.at:286: 
sed 's,.*/$,,' stderr 1>&2 ./sets.at:286: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=error stderr: bison (GNU Bison) 3.8.2 input.y: error: 1 nonterminal useless in grammar [-Werror=other] input.y: error: 1 rule useless in grammar [-Werror=other] input.y:4.1-7: error: nonterminal useless in grammar: useless [-Werror=other] Reduced Grammar ntokens = 7, nnterms = 4, nsyms = 11, nrules = 6, nritems = 17 Tokens ------ Value Sprec Sassoc Tag 0 0 0 $end 1 0 0 error 2 0 0 $undefined 3 0 0 "+" 4 0 0 "*" 5 0 0 "useless" 6 0 0 "num" Nonterminals ------------ Value Tag 7 $accept 8 expr 9 term 10 fact Rules ----- Num (Prec, Assoc, Useful, UselessChain) Lhs -> (Ritem Range) Rhs 0 ( 0, 0, t, f) 7 -> ( 0- 1) 8 0 1 ( 0, 0, t, f) 8 -> ( 3- 5) 8 3 9 2 ( 0, 0, t, t) 8 -> ( 7- 7) 9 3 ( 0, 0, t, f) 9 -> ( 9-11) 9 4 10 4 ( 0, 0, t, t) 9 -> (13-13) 10 5 ( 0, 0, t, t) 10 -> (17-17) 6 6 ( 0, 0, f, t) 11 -> (15-15) 5 Rules interpreted ----------------- 0 $accept: expr $end 1 expr: expr "+" term 2 expr: term 3 term: term "*" fact 4 term: fact 5 fact: "num" 6 useless: "useless" reduced input.y defines 7 terminals, 4 nonterminals, and 6 productions. 
./sets.at:325: sed 's,.*/$,,' stderr 1>&2 ./sets.at:325: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --trace=grammar -o input.c input.y --warnings=error ./sets.at:286: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Wnone,none -Werror --trace=none ./sets.at:286: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=none -Werror --trace=none ./sets.at:325: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --trace=grammar -o input.c input.y -Wnone,none -Werror --trace=none 172. sets.at:269: ok 174. sets.at:394: testing Reduced Grammar with prec and assoc ... ./sets.at:412: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --trace=grammar -o input.c input.y ./sets.at:325: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --trace=grammar -o input.c input.y --warnings=none -Werror --trace=none 173. sets.at:315: ok 174. sets.at:394: ok 175. reduce.at:26: testing Useless Terminals ... ./reduce.at:47: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.y 176. reduce.at:70: testing Useless Nonterminals ... 
./reduce.at:89: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input.y ./reduce.at:89: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Werror ./reduce.at:49: sed -n '/^Grammar/q;/^$/!p' input.output 175. reduce.at:26: ok 177. reduce.at:120: testing Useless Rules ... ./reduce.at:146: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret input.y stderr: input.y: error: 3 nonterminals useless in grammar [-Werror=other] input.y: error: 3 rules useless in grammar [-Werror=other] input.y:11.1-8: error: nonterminal useless in grammar: useless1 [-Werror=other] input.y:12.1-8: error: nonterminal useless in grammar: useless2 [-Werror=other] input.y:13.1-8: error: nonterminal useless in grammar: useless3 [-Werror=other] ./reduce.at:89: sed 's,.*/$,,' stderr 1>&2 ./reduce.at:89: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y --warnings=error ./reduce.at:146: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Werror ./reduce.at:89: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y -Wnone,none -Werror --trace=none ./reduce.at:89: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no 
-fno-caret input.y --warnings=none -Werror --trace=none stderr: input.y: error: 9 nonterminals useless in grammar [-Werror=other] input.y: error: 9 rules useless in grammar [-Werror=other] input.y:10.1-8: error: nonterminal useless in grammar: useless1 [-Werror=other] 10 | useless1: '1'; | ^~~~~~~~ input.y:11.1-8: error: nonterminal useless in grammar: useless2 [-Werror=other] 11 | useless2: '2'; | ^~~~~~~~ input.y:12.1-8: error: nonterminal useless in grammar: useless3 [-Werror=other] 12 | useless3: '3'; | ^~~~~~~~ input.y:13.1-8: error: nonterminal useless in grammar: useless4 [-Werror=other] 13 | useless4: '4'; | ^~~~~~~~ input.y:14.1-8: error: nonterminal useless in grammar: useless5 [-Werror=other] 14 | useless5: '5'; | ^~~~~~~~ input.y:15.1-8: error: nonterminal useless in grammar: useless6 [-Werror=other] 15 | useless6: '6'; | ^~~~~~~~ input.y:16.1-8: error: nonterminal useless in grammar: useless7 [-Werror=other] 16 | useless7: '7'; | ^~~~~~~~ input.y:17.1-8: error: nonterminal useless in grammar: useless8 [-Werror=other] 17 | useless8: '8'; | ^~~~~~~~ input.y:18.1-8: error: nonterminal useless in grammar: useless9 [-Werror=other] 18 | useless9: '9'; | ^~~~~~~~ ./reduce.at:146: sed 's,.*/$,,' stderr 1>&2 ./reduce.at:146: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=error ./reduce.at:97: sed -n '/^Grammar/q;/^$/!p' input.output ./reduce.at:109: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c ./reduce.at:146: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Wnone,none -Werror --trace=none ./reduce.at:146: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS 
--leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=none -Werror --trace=none stderr: stdout: 176. reduce.at:70: ok 178. reduce.at:224: testing Useless Parts ... ./reduce.at:261: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -rall -o input.c input.y ./reduce.at:261: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -rall -o input.c input.y -Werror ./reduce.at:179: sed -n '/^Grammar/q;/^$/!p' input.output ./reduce.at:213: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c stderr: input.y: error: 1 nonterminal useless in grammar [-Werror=other] input.y: error: 1 rule useless in grammar [-Werror=other] input.y:18.1-6: error: nonterminal useless in grammar: unused [-Werror=other] 18 | unused | ^~~~~~ ./reduce.at:261: sed 's,.*/$,,' stderr 1>&2 ./reduce.at:261: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -rall -o input.c input.y --warnings=error ./reduce.at:261: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -rall -o input.c input.y -Wnone,none -Werror --trace=none ./reduce.at:261: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -rall -o input.c input.y --warnings=none -Werror --trace=none stderr: stdout: 177. reduce.at:120: ok ./reduce.at:270: sed -n '/^State 0/q;/^$/!p' input.output 179. 
reduce.at:312: testing Reduced Automaton ... ./reduce.at:341: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret not-reduced.y ./reduce.at:298: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c ./reduce.at:341: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret not-reduced.y -Werror stderr: not-reduced.y: error: 2 nonterminals useless in grammar [-Werror=other] not-reduced.y: error: 3 rules useless in grammar [-Werror=other] not-reduced.y:14.1-13: error: nonterminal useless in grammar: not_reachable [-Werror=other] 14 | not_reachable: useful { /* A not reachable action. */ } | ^~~~~~~~~~~~~ not-reduced.y:17.1-14: error: nonterminal useless in grammar: non_productive [-Werror=other] 17 | non_productive: non_productive useless_token | ^~~~~~~~~~~~~~ not-reduced.y:11.6-57: error: rule useless in grammar [-Werror=other] 11 | | non_productive { /* A non productive action. */ } | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stderr: stdout: ./reduce.at:341: sed 's,.*/$,,' stderr 1>&2 178. reduce.at:224: ok ./reduce.at:341: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret not-reduced.y --warnings=error 180. reduce.at:406: testing Underivable Rules ... 
./reduce.at:420: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret input.y ./reduce.at:341: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret not-reduced.y -Wnone,none -Werror --trace=none ./reduce.at:420: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Werror ./reduce.at:341: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret not-reduced.y --warnings=none -Werror --trace=none stderr: input.y: error: 2 nonterminals useless in grammar [-Werror=other] input.y: error: 3 rules useless in grammar [-Werror=other] input.y:6.1-11: error: nonterminal useless in grammar: underivable [-Werror=other] 6 | underivable: indirection; | ^~~~~~~~~~~ input.y:7.1-11: error: nonterminal useless in grammar: indirection [-Werror=other] 7 | indirection: underivable; | ^~~~~~~~~~~ input.y:5.15-25: error: rule useless in grammar [-Werror=other] 5 | exp: useful | underivable; | ^~~~~~~~~~~ stderr: stdout: ./reduce.at:420: sed 's,.*/$,,' stderr 1>&2 ./output.at:835: $CXX $CPPFLAGS $CXXFLAGS -Iout/ $LDFLAGS -o parser out/x[12].o main.cc $LIBS ./reduce.at:420: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=error ./reduce.at:355: sed -n '/^Grammar/q;/^$/!p' not-reduced.output ./reduce.at:392: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; 
bison --color=no -fno-caret reduced.y ./reduce.at:420: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y -Wnone,none -Werror --trace=none ./reduce.at:396: sed 's/not-reduced/reduced/g' not-reduced.c 179. reduce.at:312: ok ./reduce.at:420: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.y --warnings=none -Werror --trace=none 181. reduce.at:452: testing Bad start symbols ... ./reduce.at:467: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y ./reduce.at:473: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y ./reduce.at:480: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y ./reduce.at:488: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y ./reduce.at:497: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y ./reduce.at:434: sed -n '/^Grammar/q;/^$/!p' input.output ./reduce.at:505: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export 
NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y 180. reduce.at:406: ok 181. reduce.at:452: ok 182. reduce.at:550: testing no lr.type: Single State Split ... ./reduce.at:550: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y 183. reduce.at:550: testing lr.type=lalr: Single State Split ... ./reduce.at:550: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y ./reduce.at:550: sed -n '/^State 0$/,$p' input.output ./reduce.at:550: sed -n '/^State 0$/,$p' input.output ./reduce.at:550: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS ./reduce.at:550: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./reduce.at:550: $PREPARSER ./input stderr: syntax error ./reduce.at:550: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 183. reduce.at:550: ok 184. reduce.at:550: testing lr.type=ielr: Single State Split ... ./reduce.at:550: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y stderr: stdout: ./reduce.at:550: $PREPARSER ./input stderr: syntax error ./reduce.at:550: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 182. reduce.at:550: ok ./reduce.at:550: sed -n '/^State 0$/,$p' input.output 185. reduce.at:550: testing lr.type=canonical-lr: Single State Split ... 
./reduce.at:550: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y ./reduce.at:550: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS ./reduce.at:550: sed -n '/^State 0$/,$p' input.output ./reduce.at:550: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./reduce.at:550: $PREPARSER ./input stderr: ./reduce.at:550: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 184. reduce.at:550: ok 186. reduce.at:783: testing no lr.type: Lane Split ... ./reduce.at:783: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y stderr: stdout: ./reduce.at:550: $PREPARSER ./input stderr: ./reduce.at:550: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 185. reduce.at:550: ok ./reduce.at:783: sed -n '/^State 0$/,$p' input.output ./reduce.at:783: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS 187. reduce.at:783: testing lr.type=lalr: Lane Split ... ./reduce.at:783: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y ./reduce.at:783: sed -n '/^State 0$/,$p' input.output ./reduce.at:783: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./output.at:836: $PREPARSER ./parser stderr: ./output.at:836: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 144. output.at:744: ok 188. reduce.at:783: testing lr.type=ielr: Lane Split ... ./reduce.at:783: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y stderr: stdout: ./reduce.at:783: $PREPARSER ./input stderr: syntax error ./reduce.at:783: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 186. 
reduce.at:783: ok 189. reduce.at:783: testing lr.type=canonical-lr: Lane Split ... ./reduce.at:783: sed -n '/^State 0$/,$p' input.output ./reduce.at:783: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y ./reduce.at:783: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./reduce.at:783: $PREPARSER ./input stderr: syntax error ./reduce.at:783: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 187. reduce.at:783: ok ./reduce.at:783: sed -n '/^State 0$/,$p' input.output 190. reduce.at:1027: testing no lr.type: Complex Lane Split ... ./reduce.at:1027: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y ./reduce.at:783: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS ./reduce.at:1027: sed -n '/^State 0$/,$p' input.output ./reduce.at:1027: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./reduce.at:783: $PREPARSER ./input stderr: ./reduce.at:783: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 188. reduce.at:783: ok 191. reduce.at:1027: testing lr.type=lalr: Complex Lane Split ... ./reduce.at:1027: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y stderr: stdout: ./reduce.at:783: $PREPARSER ./input stderr: ./reduce.at:783: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./reduce.at:1027: sed -n '/^State 0$/,$p' input.output 189. reduce.at:783: ok ./reduce.at:1027: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS 192. reduce.at:1027: testing lr.type=ielr: Complex Lane Split ... 
./reduce.at:1027: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y
./reduce.at:1027: sed -n '/^State 0$/,$p' input.output
stderr:
./reduce.at:1027: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stdout:
./reduce.at:1027: $PREPARSER ./input
stderr:
syntax error
./reduce.at:1027: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
190. reduce.at:1027: ok
193. reduce.at:1027: testing lr.type=canonical-lr: Complex Lane Split ...
./reduce.at:1027: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y
./reduce.at:1027: sed -n '/^State 0$/,$p' input.output
./reduce.at:1027: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./reduce.at:1027: $PREPARSER ./input
stderr:
syntax error
./reduce.at:1027: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
191. reduce.at:1027: ok
194. reduce.at:1296: testing no lr.type: Split During Added Lookahead Propagation ...
./reduce.at:1296: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y
stderr:
stdout:
./reduce.at:1027: $PREPARSER ./input
stderr:
./reduce.at:1027: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
192. reduce.at:1027: ok
./reduce.at:1296: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Werror
195. reduce.at:1296: testing lr.type=lalr: Split During Added Lookahead Propagation ...
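[Editor's note] Many checks above compare only the state listing from bison's `--report=all` output file, extracted with a sed range address (`/^State 0$/,$p`). A minimal stand-in demonstration of that idiom follows; the file name `sample.output` and its contents are invented for illustration, not taken from the test suite.

```shell
# Print everything from the first line matching "State 0" to end of file.
# sample.output stands in for the input.output report bison writes.
printf 'Grammar\n\nrule 1\n\nState 0\n\n  shift, and go to state 1\n' > sample.output
sed -n '/^State 0$/,$p' sample.output
# First printed line is "State 0"; the grammar listing above it is dropped.
```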
./reduce.at:1296: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y
./reduce.at:1296: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Werror
stderr:
input.y: error: 1 reduce/reduce conflict [-Werror=conflicts-rr]
input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
./reduce.at:1296: sed 's,.*/$,,' stderr 1>&2
./reduce.at:1296: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=error
stderr:
stdout:
./reduce.at:1027: $PREPARSER ./input
stderr:
./reduce.at:1027: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
193. reduce.at:1027: ok
./reduce.at:1296: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Wnone,none -Werror --trace=none
196. reduce.at:1296: testing lr.type=ielr: Split During Added Lookahead Propagation ...
./reduce.at:1296: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y
stderr:
input.y: error: 1 reduce/reduce conflict [-Werror=conflicts-rr]
input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
./reduce.at:1296: sed 's,.*/$,,' stderr 1>&2
./reduce.at:1296: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=error
./reduce.at:1296: sed -n '/^State 0$/,$p' input.output
./reduce.at:1296: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./reduce.at:1296: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=none -Werror --trace=none
./reduce.at:1296: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Wnone,none -Werror --trace=none
./reduce.at:1296: sed -n '/^State 0$/,$p' input.output
./reduce.at:1296: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=none -Werror --trace=none
./reduce.at:1296: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./reduce.at:1296: sed -n '/^State 0$/,$p' input.output
./reduce.at:1296: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./reduce.at:1296: $PREPARSER ./input
stderr:
./reduce.at:1296: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
196. reduce.at:1296: ok
197. reduce.at:1296: testing lr.type=canonical-lr: Split During Added Lookahead Propagation ...
./reduce.at:1296: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y
stderr:
stdout:
./reduce.at:1296: $PREPARSER ./input
stderr:
syntax error
./reduce.at:1296: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
194. reduce.at:1296: ok
./reduce.at:1296: sed -n '/^State 0$/,$p' input.output
./reduce.at:1296: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
198. reduce.at:1627: testing no lr.default-reduction ...
./reduce.at:1627: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y
./reduce.at:1627: sed -n '/^State 0$/,$p' input.output
./reduce.at:1627: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./reduce.at:1296: $PREPARSER ./input
stderr:
syntax error
./reduce.at:1296: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
195. reduce.at:1296: ok
199. reduce.at:1627: testing lr.default-reduction=most ...
./reduce.at:1627: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y
stderr:
stdout:
./reduce.at:1296: $PREPARSER ./input
./reduce.at:1627: sed -n '/^State 0$/,$p' input.output
stderr:
./reduce.at:1296: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
197. reduce.at:1296: ok
./reduce.at:1627: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
200. reduce.at:1627: testing lr.default-reduction=consistent ...
./reduce.at:1627: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y
stderr:
stdout:
./reduce.at:1627: $PREPARSER ./input
stderr:
./reduce.at:1627: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
198. reduce.at:1627: ok
./reduce.at:1627: sed -n '/^State 0$/,$p' input.output
./reduce.at:1627: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
201. reduce.at:1627: testing lr.default-reduction=accepting ...
./reduce.at:1627: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y
./reduce.at:1627: sed -n '/^State 0$/,$p' input.output
./reduce.at:1627: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./reduce.at:1627: $PREPARSER ./input
stderr:
./reduce.at:1627: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
199. reduce.at:1627: ok
stderr:
stdout:
202. report.at:37: testing Reports ...
./reduce.at:1627: $PREPARSER ./input
202. report.at:37: skipped (report.at:75)
stderr:
./reduce.at:1627: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
200. reduce.at:1627: ok
203. report.at:3123: testing Reports with conflicts ...
203. report.at:3123: skipped (report.at:3132)
204. conflicts.at:28: testing Token declaration order ...
./conflicts.at:81: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
205. conflicts.at:101: testing Token declaration order: literals vs. identifiers ...
./conflicts.at:130: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --report=all -o input.c input.y
stderr:
./conflicts.at:131: cat input.output | sed -n '/^State 0$/,/^State 1$/p'
stderr:
stdout:
205. conflicts.at:101: ok
./reduce.at:1627: $PREPARSER ./input
stderr:
./reduce.at:1627: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:82: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
201. reduce.at:1627: ok
206. conflicts.at:183: testing Useless associativity warning ...
./conflicts.at:205: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wprecedence input.y
207. conflicts.at:218: testing Useless precedence warning ...
./conflicts.at:248: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wprecedence -fcaret -o input.c input.y
./conflicts.at:205: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wprecedence input.y -Werror
./conflicts.at:248: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wprecedence -fcaret -o input.c input.y -Werror
stderr:
stdout:
./conflicts.at:84: $PREPARSER ./input
stderr:
stderr:
./conflicts.at:84: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input.y:2.1-9: error: useless precedence and associativity for "=" [-Werror=precedence]
input.y:4.1-5: error: useless associativity for "*", use %precedence [-Werror=precedence]
input.y:5.1-11: error: useless precedence for "(" [-Werror=precedence]
stderr:
204. conflicts.at:28: ok
./conflicts.at:205: sed 's,.*/$,,' stderr 1>&2
input.y:7.1-9: error: useless precedence and associativity for U [-Werror=precedence]
    7 | %nonassoc U
      | ^~~~~~~~~
input.y:6.1-6: error: useless precedence and associativity for V [-Werror=precedence]
    6 | %right V
      | ^~~~~~
input.y:5.1-5: error: useless precedence and associativity for W [-Werror=precedence]
    5 | %left W
      | ^~~~~
input.y:2.1-11: error: useless precedence for Z [-Werror=precedence]
    2 | %precedence Z
      | ^~~~~~~~~~~
./conflicts.at:248: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:205: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wprecedence input.y --warnings=error
./conflicts.at:248: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wprecedence -fcaret -o input.c input.y --warnings=error
208. conflicts.at:275: testing S/R in initial ...
./conflicts.at:284: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./conflicts.at:248: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wprecedence -fcaret -o input.c input.y -Wnone,none -Werror --trace=none
./conflicts.at:205: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wprecedence input.y -Wnone,none -Werror --trace=none
./conflicts.at:284: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Werror
./conflicts.at:248: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wprecedence -fcaret -o input.c input.y --warnings=none -Werror --trace=none
stderr:
input.y:4.10-15: error: rule useless in parser due to conflicts [-Werror=other]
./conflicts.at:284: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:205: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wprecedence input.y --warnings=none -Werror --trace=none
./conflicts.at:284: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=error
207. conflicts.at:218: ok
./conflicts.at:284: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Wnone,none -Werror --trace=none
209. conflicts.at:301: testing %nonassoc and eof ...
./conflicts.at:368: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
206. conflicts.at:183: ok
./conflicts.at:368: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
210. conflicts.at:509: testing parse.error=verbose and consistent errors: lr.type=ielr ...
./conflicts.at:509: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./conflicts.at:284: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=none -Werror --trace=none
./conflicts.at:509: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./conflicts.at:288: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -o input.c input.y
./conflicts.at:288: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y -Werror
stderr:
input.y:4.10-15: error: rule useless in parser due to conflicts [-Werror=other]
    4 | e: 'e' | %empty;
      |          ^~~~~~
./conflicts.at:288: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:288: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y --warnings=error
./conflicts.at:288: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y -Wnone,none -Werror --trace=none
./conflicts.at:288: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y --warnings=none -Werror --trace=none
208. conflicts.at:275: ok
211. conflicts.at:513: testing parse.error=verbose and consistent errors: lr.type=ielr %glr-parser ...
./conflicts.at:513: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
stderr:
stdout:
./conflicts.at:368: $PREPARSER ./input '0<0'
stderr:
./conflicts.at:368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:368: $PREPARSER ./input '0<0<0'
./conflicts.at:513: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
syntax error, unexpected '<'
./conflicts.at:368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:368: $PREPARSER ./input '0>0'
stderr:
./conflicts.at:368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:368: $PREPARSER ./input '0>0>0'
stderr:
syntax error, unexpected '>'
./conflicts.at:368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:368: $PREPARSER ./input '0<0>0'
stderr:
syntax error, unexpected '>'
./conflicts.at:368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:372: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dlr.default-reduction=consistent -o input.c input.y
stderr:
stdout:
./conflicts.at:509: $PREPARSER ./input
stderr:
syntax error, unexpected end of file
./conflicts.at:509: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:372: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
210. conflicts.at:509: ok
212. conflicts.at:518: testing parse.error=verbose and consistent errors: lr.type=ielr c++ ...
./conflicts.at:518: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./conflicts.at:518: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./conflicts.at:372: $PREPARSER ./input '0<0'
stderr:
./conflicts.at:372: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:372: $PREPARSER ./input '0<0<0'
stderr:
syntax error, unexpected '<', expecting end of file
./conflicts.at:372: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:372: $PREPARSER ./input '0>0'
stderr:
./conflicts.at:372: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:372: $PREPARSER ./input '0>0>0'
stderr:
syntax error, unexpected '>', expecting end of file
./conflicts.at:372: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:372: $PREPARSER ./input '0<0>0'
stderr:
syntax error, unexpected '>', expecting end of file
./conflicts.at:372: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:381: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dlr.type=canonical-lr -o input.c input.y
./conflicts.at:381: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./conflicts.at:381: $PREPARSER ./input '0<0'
stderr:
./conflicts.at:381: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:381: $PREPARSER ./input '0<0<0'
stderr:
syntax error, unexpected '<', expecting end of file
./conflicts.at:381: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
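[Editor's note] After every parser run above, the harness pipes stderr through `sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d'` to strip gcov profiling noise before comparing the diagnostics. A stand-in demonstration of that filter (the file name `stderr.log` and sample lines are invented for illustration):

```shell
# Drop "profiling: ...: Merge mismatch for summaries" lines; real diagnostics pass through.
printf 'profiling:/tmp/x.gcda:Merge mismatch for summaries\nsyntax error\n' > stderr.log
sed -e '/^profiling:.*:Merge mismatch for summaries/d' stderr.log
# Only "syntax error" remains.
```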
./conflicts.at:381: $PREPARSER ./input '0>0'
stderr:
stdout:
./conflicts.at:513: $PREPARSER ./input
stderr:
stderr:
./conflicts.at:381: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
syntax error, unexpected end of file
./conflicts.at:513: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:381: $PREPARSER ./input '0>0>0'
211. conflicts.at:513: ok
stderr:
syntax error, unexpected '>', expecting end of file
./conflicts.at:381: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:381: $PREPARSER ./input '0<0>0'
stderr:
syntax error, unexpected '>', expecting end of file
./conflicts.at:381: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:388: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dparse.lac=full -o input.c input.y
213. conflicts.at:523: testing parse.error=verbose and consistent errors: lr.type=ielr java ...
./conflicts.at:523: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.java input.y
./conflicts.at:388: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
213. conflicts.at:523: skipped (conflicts.at:523)
214. conflicts.at:530: testing parse.error=verbose and consistent errors: lr.type=ielr lr.default-reduction=consistent ...
./conflicts.at:530: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./conflicts.at:530: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./conflicts.at:388: $PREPARSER ./input '0<0'
stderr:
./conflicts.at:388: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:388: $PREPARSER ./input '0<0<0'
stderr:
syntax error, unexpected '<', expecting end of file
./conflicts.at:388: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:388: $PREPARSER ./input '0>0'
stderr:
./conflicts.at:388: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:388: $PREPARSER ./input '0>0>0'
stderr:
syntax error, unexpected '>', expecting end of file
./conflicts.at:388: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:388: $PREPARSER ./input '0<0>0'
stderr:
syntax error, unexpected '>', expecting end of file
./conflicts.at:388: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
209. conflicts.at:301: ok
stderr:
stdout:
./conflicts.at:530: $PREPARSER ./input
stderr:
syntax error, unexpected end of file, expecting 'a' or 'b'
./conflicts.at:530: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
215. conflicts.at:535: testing parse.error=verbose and consistent errors: lr.type=ielr lr.default-reduction=accepting ...
./conflicts.at:535: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
214. conflicts.at:530: ok
216. conflicts.at:540: testing parse.error=verbose and consistent errors: lr.type=canonical-lr ...
./conflicts.at:540: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./conflicts.at:535: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./conflicts.at:540: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./conflicts.at:535: $PREPARSER ./input
stderr:
stdout:
stderr:
./conflicts.at:518: $PREPARSER ./input
syntax error, unexpected end of file, expecting 'a' or 'b'
./conflicts.at:535: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected end of file
./conflicts.at:518: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
215. conflicts.at:535: ok
212. conflicts.at:518: ok
217. conflicts.at:546: testing parse.error=verbose and consistent errors: lr.type=canonical-lr parse.lac=full ...
./conflicts.at:546: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
stderr:
218. conflicts.at:551: testing parse.error=verbose and consistent errors: lr.type=ielr parse.lac=full ...
./conflicts.at:551: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
stdout:
./conflicts.at:540: $PREPARSER ./input
stderr:
syntax error, unexpected end of file, expecting 'a' or 'b'
./conflicts.at:540: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
216. conflicts.at:540: ok
219. conflicts.at:558: testing parse.error=verbose and consistent errors: c++ lr.type=canonical-lr parse.lac=full ...
./conflicts.at:558: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./conflicts.at:546: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./conflicts.at:551: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./conflicts.at:558: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./conflicts.at:551: $PREPARSER ./input
stderr:
syntax error, unexpected end of file, expecting 'b'
./conflicts.at:551: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
218. conflicts.at:551: ok
stderr:
stdout:
./conflicts.at:546: $PREPARSER ./input
stderr:
syntax error, unexpected end of file, expecting 'b'
./conflicts.at:546: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
217. conflicts.at:546: ok
220. conflicts.at:564: testing parse.error=verbose and consistent errors: c++ lr.type=ielr parse.lac=full ...
./conflicts.at:564: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
221. conflicts.at:622: testing parse.error=verbose and consistent errors: ...
./conflicts.at:622: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./conflicts.at:564: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
./conflicts.at:622: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./conflicts.at:622: $PREPARSER ./input
stderr:
syntax error, unexpected 'b'
./conflicts.at:622: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
221. conflicts.at:622: ok
222. conflicts.at:626: testing parse.error=verbose and consistent errors: %glr-parser ...
./conflicts.at:626: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./conflicts.at:626: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./conflicts.at:558: $PREPARSER ./input
stderr:
syntax error, unexpected end of file, expecting 'b'
./conflicts.at:558: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
219. conflicts.at:558: ok
223. conflicts.at:632: testing parse.error=verbose and consistent errors: lr.default-reduction=consistent ...
./conflicts.at:632: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
stderr:
./conflicts.at:632: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stdout:
./conflicts.at:626: $PREPARSER ./input
stderr:
syntax error, unexpected 'b'
./conflicts.at:626: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
222. conflicts.at:626: ok
224. conflicts.at:638: testing parse.error=verbose and consistent errors: lr.default-reduction=accepting ...
./conflicts.at:638: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./conflicts.at:638: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./conflicts.at:564: $PREPARSER ./input
stderr:
syntax error, unexpected end of file, expecting 'b'
./conflicts.at:564: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
220. conflicts.at:564: ok
225. conflicts.at:642: testing parse.error=verbose and consistent errors: lr.type=canonical-lr ...
./conflicts.at:642: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./conflicts.at:642: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./conflicts.at:632: $PREPARSER ./input
stderr:
syntax error, unexpected 'b'
./conflicts.at:632: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
223. conflicts.at:632: ok
226. conflicts.at:647: testing parse.error=verbose and consistent errors: parse.lac=full ...
./conflicts.at:647: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./conflicts.at:647: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./conflicts.at:638: $PREPARSER ./input
stderr:
syntax error, unexpected end of file, expecting 'a'
./conflicts.at:638: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
224. conflicts.at:638: ok
227. conflicts.at:651: testing parse.error=verbose and consistent errors: parse.lac=full lr.default-reduction=accepting ...
./conflicts.at:651: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./conflicts.at:651: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./conflicts.at:642: $PREPARSER ./input
stderr:
syntax error, unexpected end of file, expecting 'a'
./conflicts.at:642: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
225. conflicts.at:642: ok
228. conflicts.at:676: testing LAC: %nonassoc requires splitting canonical LR states ...
./conflicts.at:726: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dlr.type=canonical-lr -o input.c input.y
./conflicts.at:726: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dlr.type=canonical-lr -o input.c input.y -Werror
stderr:
input.y: error: 2 shift/reduce conflicts [-Werror=conflicts-sr]
input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
./conflicts.at:726: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:726: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dlr.type=canonical-lr -o input.c input.y --warnings=error
./conflicts.at:726: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dlr.type=canonical-lr -o input.c input.y -Wnone,none -Werror --trace=none
./conflicts.at:726: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dlr.type=canonical-lr -o input.c input.y --warnings=none -Werror --trace=none
stderr:
stdout:
./conflicts.at:647: $PREPARSER ./input
stderr:
syntax error, unexpected 'b'
./conflicts.at:647: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:731: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
226. conflicts.at:647: ok
229. conflicts.at:764: testing Unresolved SR Conflicts ...
./conflicts.at:774: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c --report=all input.y
stderr:
stdout:
./conflicts.at:651: $PREPARSER ./input
stderr:
syntax error, unexpected end of file
./conflicts.at:651: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:774: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c --report=all input.y -Werror
227. conflicts.at:651: ok
230. conflicts.at:887: testing Resolved SR Conflicts ...
./conflicts.at:898: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c --report=all input.y
stderr:
input.y: error: 1 shift/reduce conflict [-Werror=conflicts-sr]
input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
./conflicts.at:774: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:774: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c --report=all input.y --warnings=error
./conflicts.at:901: cat input.output
230. conflicts.at:887: ok
231. conflicts.at:989: testing %precedence suffices ...
./conflicts.at:1006: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./conflicts.at:774: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c --report=all input.y -Wnone,none -Werror --trace=none
231. conflicts.at:989: ok
./conflicts.at:774: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c --report=all input.y --warnings=none -Werror --trace=none
232. conflicts.at:1015: testing %precedence does not suffice ...
./conflicts.at:1033: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./conflicts.at:780: cat input.output
229. conflicts.at:764: ok
233. conflicts.at:1096: testing Syntax error in consistent error state: yacc.c ...
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./conflicts.at:1033: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Werror
stderr:
input.y: error: 1 shift/reduce conflict [-Werror=conflicts-sr]
input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
input.y:12.3-18: error: rule useless in parser due to conflicts [-Werror=other]
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Werror
./conflicts.at:1033: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:1033: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=error
stderr:
stdout:
./conflicts.at:732: $PREPARSER ./input
stderr:
syntax error, unexpected 'a', expecting 'b'
./conflicts.at:732: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
input.y:17.5-25: error: rule useless in parser due to conflicts [-Werror=other]
input.y:18.5-29: error: rule useless in parser due to conflicts [-Werror=other]
./conflicts.at:737: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dlr.type=canonical-lr -Dparse.lac=full \
  -o input.c input.y
./conflicts.at:1096: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=error
./conflicts.at:1033: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Wnone,none -Werror --trace=none
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Wnone,none -Werror --trace=none
./conflicts.at:737: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dlr.type=canonical-lr -Dparse.lac=full \
  -o input.c input.y -Werror
./conflicts.at:1033: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=none -Werror --trace=none
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=none -Werror --trace=none
stderr:
input.y: error: 2 shift/reduce conflicts [-Werror=conflicts-sr]
input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
232. conflicts.at:1015: ok
./conflicts.at:737: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:737: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dlr.type=canonical-lr -Dparse.lac=full \
  -o input.c input.y --warnings=error
234. conflicts.at:1096: testing Syntax error in consistent error state: glr.c ...
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./conflicts.at:1096: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./conflicts.at:737: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dlr.type=canonical-lr -Dparse.lac=full \
  -o input.c input.y -Wnone,none -Werror --trace=none
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Werror
./conflicts.at:737: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dlr.type=canonical-lr -Dparse.lac=full \
  -o input.c input.y --warnings=none -Werror --trace=none
stderr:
input.y:17.5-25: error: rule useless in parser due to conflicts [-Werror=other]
input.y:18.5-29: error: rule useless in parser due to conflicts [-Werror=other]
./conflicts.at:1096: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=error
./conflicts.at:742: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Wnone,none -Werror --trace=none
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=none -Werror --trace=none
./conflicts.at:1096: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./conflicts.at:1096: $PREPARSER ./input
stderr:
syntax error
./conflicts.at:1096: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
233. conflicts.at:1096: ok
235. conflicts.at:1096: testing Syntax error in consistent error state: lalr1.cc ...
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.cc input.y -Werror
stderr:
input.y:17.5-25: error: rule useless in parser due to conflicts [-Werror=other]
input.y:18.5-29: error: rule useless in parser due to conflicts [-Werror=other]
./conflicts.at:1096: sed 's,.*/$,,' stderr 1>&2
stderr:
stdout:
./conflicts.at:743: $PREPARSER ./input
stderr:
syntax error, unexpected 'a', expecting 'b' or 'c'
./conflicts.at:743: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.cc input.y --warnings=error
./conflicts.at:748: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dlr.type=ielr -Dparse.lac=full -o input.c input.y
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.cc input.y -Wnone,none -Werror --trace=none
./conflicts.at:748: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dlr.type=ielr -Dparse.lac=full -o input.c input.y -Werror
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.cc input.y --warnings=none -Werror --trace=none
stderr:
input.y: error: 2 shift/reduce conflicts [-Werror=conflicts-sr]
input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
./conflicts.at:748: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:1096: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
./conflicts.at:748: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dlr.type=ielr -Dparse.lac=full -o input.c input.y --warnings=error
./conflicts.at:748: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dlr.type=ielr -Dparse.lac=full -o input.c input.y -Wnone,none -Werror --trace=none
./conflicts.at:748: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Dlr.type=ielr -Dparse.lac=full -o input.c input.y --warnings=none -Werror --trace=none
./conflicts.at:753: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./conflicts.at:754: $PREPARSER ./input
stderr:
syntax error, unexpected 'a', expecting 'b' or 'c'
./conflicts.at:754: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
228. conflicts.at:676: ok
stderr:
stdout:
./conflicts.at:1096: $PREPARSER ./input
stderr:
syntax error
./conflicts.at:1096: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
234. conflicts.at:1096: ok
236. conflicts.at:1096: testing Syntax error in consistent error state: glr.cc ...
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
237. conflicts.at:1096: testing Syntax error in consistent error state: glr2.cc ...
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.cc input.y -Werror
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.cc input.y -Werror
stderr:
input.y:17.5-25: error: rule useless in parser due to conflicts [-Werror=other]
input.y:18.5-29: error: rule useless in parser due to conflicts [-Werror=other]
stderr:
./conflicts.at:1096: sed 's,.*/$,,' stderr 1>&2
input.y:17.5-25: error: rule useless in parser due to conflicts [-Werror=other]
input.y:18.5-29: error: rule useless in parser due to conflicts [-Werror=other]
./conflicts.at:1096: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.cc input.y --warnings=error
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.cc input.y --warnings=error
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.cc input.y -Wnone,none -Werror --trace=none
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.cc input.y -Wnone,none -Werror --trace=none
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.cc input.y --warnings=none -Werror --trace=none
./conflicts.at:1096: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.cc input.y --warnings=none -Werror --trace=none
./conflicts.at:1096: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
./conflicts.at:1096: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./conflicts.at:1096: $PREPARSER ./input
stderr:
syntax error
./conflicts.at:1096: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
235. conflicts.at:1096: ok
238. conflicts.at:1127: testing Defaulted Conflicted Reduction ...
./conflicts.at:1138: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c --report=all input.y
./conflicts.at:1138: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c --report=all input.y -Werror
stderr:
input.y: error: 1 reduce/reduce conflict [-Werror=conflicts-rr]
input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
input.y:4.6-8: error: rule useless in parser due to conflicts [-Werror=other]
./conflicts.at:1138: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:1138: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c --report=all input.y --warnings=error
./conflicts.at:1138: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c --report=all input.y -Wnone,none -Werror --trace=none
./conflicts.at:1138: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c --report=all input.y --warnings=none -Werror --trace=none
./conflicts.at:1145: cat input.output
238. conflicts.at:1127: ok
239. conflicts.at:1264: testing %expect not enough ...
./conflicts.at:1273: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y
239. conflicts.at:1264: ok
240. conflicts.at:1284: testing %expect right ...
./conflicts.at:1293: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
240. conflicts.at:1284: ok
241. conflicts.at:1301: testing %expect too much ...
./conflicts.at:1310: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y
241. conflicts.at:1301: ok
242. conflicts.at:1321: testing %expect with reduce conflicts ...
./conflicts.at:1330: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y
242. conflicts.at:1321: ok
243. conflicts.at:1341: testing %expect in grammar rule not enough ...
./conflicts.at:1350: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y
243. conflicts.at:1341: ok
244. conflicts.at:1360: testing %expect in grammar rule right ...
./conflicts.at:1369: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
244. conflicts.at:1360: ok
245. conflicts.at:1377: testing %expect in grammar rules ...
./conflicts.at:1388: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c -rall input.y
245. conflicts.at:1377: ok
246. conflicts.at:1396: testing %expect in grammar rule too much ...
./conflicts.at:1405: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y
246. conflicts.at:1396: ok
247. conflicts.at:1415: testing %expect-rr in grammar rule ...
./conflicts.at:1432: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
247. conflicts.at:1415: ok
248. conflicts.at:1440: testing %expect-rr too much in grammar rule ...
./conflicts.at:1457: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y
248. conflicts.at:1440: ok
249. conflicts.at:1469: testing %expect-rr not enough in grammar rule ...
./conflicts.at:1486: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y
249. conflicts.at:1469: ok
250. conflicts.at:1498: testing %prec with user string ...
./conflicts.at:1507: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
250. conflicts.at:1498: ok
251. conflicts.at:1515: testing %no-default-prec without %prec ...
./conflicts.at:1531: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall -o input.c input.y
./conflicts.at:1531: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall -o input.c input.y -Werror
stderr:
input.y: error: 4 shift/reduce conflicts [-Werror=conflicts-sr]
input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
input.y:1.1-5: error: useless precedence and associativity for '+' [-Werror=precedence]
input.y:2.1-5: error: useless precedence and associativity for '*' [-Werror=precedence]
./conflicts.at:1531: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:1531: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall -o input.c input.y --warnings=error
./conflicts.at:1531: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall -o input.c input.y -Wnone,none -Werror --trace=none
./conflicts.at:1531: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall -o input.c input.y --warnings=none -Werror --trace=none
251. conflicts.at:1515: ok
stderr:
stdout:
./conflicts.at:1096: $PREPARSER ./input
stderr:
syntax error
./conflicts.at:1096: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
236. conflicts.at:1096: ok
252. conflicts.at:1544: testing %no-default-prec with %prec ...
./conflicts.at:1560: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
253. conflicts.at:1568: testing %default-prec ...
./conflicts.at:1584: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
252. conflicts.at:1544: ok
254. conflicts.at:1592: testing Unreachable States After Conflict Resolution ...
./conflicts.at:1638: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --report=all input.y
253. conflicts.at:1568: ok
255. conflicts.at:1855: testing Solved conflicts report for multiple reductions in a state ...
./conflicts.at:1881: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --report=all -o input.c input.y
./conflicts.at:1638: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --report=all input.y -Werror
stderr:
input.y:7.5-7: warning: rule useless in parser due to conflicts [-Wother]
input.y:11.5-7: warning: rule useless in parser due to conflicts [-Wother]
input.y:17.11-26: warning: rule useless in parser due to conflicts [-Wother]
input.y:18.11-26: warning: rule useless in parser due to conflicts [-Wother]
input.y:19.11-26: warning: rule useless in parser due to conflicts [-Wother]
./conflicts.at:1882: cat input.output | sed -n '/^State 0$/,/^State 1$/p'
255. conflicts.at:1855: ok
stderr:
input.y: error: 1 shift/reduce conflict [-Werror=conflicts-sr]
input.y: error: 1 reduce/reduce conflict [-Werror=conflicts-rr]
input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
input.y:12.5-20: error: rule useless in parser due to conflicts [-Werror=other]
input.y:20.5-20: error: rule useless in parser due to conflicts [-Werror=other]
input.y:21.4: error: rule useless in parser due to conflicts [-Werror=other]
input.y:25.14: error: rule useless in parser due to conflicts [-Werror=other]
input.y:25.16: error: rule useless in parser due to conflicts [-Werror=other]
input.y:31.5-7: error: rule useless in parser due to conflicts [-Werror=other]
input.y:32.4: error: rule useless in parser due to conflicts [-Werror=other]
256. conflicts.at:1935: testing %nonassoc error actions for multiple reductions in a state ...
./conflicts.at:1638: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:1959: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --trace=cex -fcaret --report=all -o input.c input.y
./conflicts.at:1638: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --report=all input.y --warnings=error
./conflicts.at:1638: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --report=all input.y -Wnone,none -Werror --trace=none
./conflicts.at:1959: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --trace=cex -fcaret --report=all -o input.c input.y -Werror
./conflicts.at:1638: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --report=all input.y --warnings=none -Werror --trace=none
stderr:
bison (GNU Bison) 3.8.2
init: 0.000000
# state items: 26
State 0:
  0 $accept: . start $end
    -> 0 $accept: start . $end
    => 2 start: . empty_a 'a'
    => 4 start: . empty_b 'b'
    => 6 start: . empty_c1 'c'
    => 7 start: . empty_c2 'c'
    => 8 start: . empty_c3 'c'
  1 start: . 'a'  DISABLED
  2 start: . empty_a 'a'
    -> 2 start: empty_a . 'a'
    => 9 empty_a: %empty .
    <- 0 $accept: . start $end
  3 start: . 'b'  DISABLED
  4 start: . empty_b 'b'
    -> 4 start: empty_b . 'b'
    => 10 empty_b: %empty .
    <- 0 $accept: . start $end
  5 start: . 'c'  DISABLED
  6 start: . empty_c1 'c'
    -> 6 start: empty_c1 . 'c'
    => 11 empty_c1: %empty .
    <- 0 $accept: . start $end
  7 start: . empty_c2 'c'
    -> 7 start: empty_c2 . 'c'
    => 12 empty_c2: %empty .
    <- 0 $accept: . start $end
  8 start: . empty_c3 'c'
    -> 8 start: empty_c3 . 'c'
    => 13 empty_c3: %empty .
    <- 0 $accept: . start $end
  9 empty_a: %empty .
    <- 2 start: . empty_a 'a'
  10 empty_b: %empty .
    <- 4 start: . empty_b 'b'
  11 empty_c1: %empty .
    <- 6 start: . empty_c1 'c'
  12 empty_c2: %empty .
    <- 7 start: . empty_c2 'c'
  13 empty_c3: %empty .
    <- 8 start: . empty_c3 'c'
State 1:
  0 $accept: start . $end
    -> 0 $accept: start $end .
    <- 0 $accept: . start $end
State 2:
  2 start: empty_a . 'a'
    -> 2 start: empty_a 'a' .
    <- 2 start: . empty_a 'a'
State 3:
  4 start: empty_b . 'b'
    -> 4 start: empty_b 'b' .
    <- 4 start: . empty_b 'b'
State 4:
  6 start: empty_c1 . 'c'
    -> 6 start: empty_c1 'c' .
    <- 6 start: . empty_c1 'c'
State 5:
  7 start: empty_c2 . 'c'
    -> 7 start: empty_c2 'c' .
    <- 7 start: . empty_c2 'c'
State 6:
  8 start: empty_c3 . 'c'
    -> 8 start: empty_c3 'c' .
    <- 8 start: . empty_c3 'c'
State 7:
  0 $accept: start $end .
    <- 0 $accept: start . $end
State 8:
  2 start: empty_a 'a' .
    <- 2 start: empty_a . 'a'
State 9:
  4 start: empty_b 'b' .
    <- 4 start: empty_b . 'b'
State 10:
  6 start: empty_c1 'c' .
    <- 6 start: empty_c1 . 'c'
State 11:
  7 start: empty_c2 'c' .
    <- 7 start: empty_c2 . 'c'
State 12:
  8 start: empty_c3 'c' .
    <- 8 start: empty_c3 . 'c'
FIRSTS
  $accept firsts
    'a' 'b' 'c'
  start firsts
    'a' 'b' 'c'
  empty_a firsts
  empty_b firsts
  empty_c1 firsts
  empty_c2 firsts
  empty_c3 firsts
input.y: error: 1 reduce/reduce conflict [-Werror=conflicts-rr]
input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
input.y:4.5-7: error: rule useless in parser due to conflicts [-Werror=other]
    4 |     'a'
      |     ^~~
input.y:6.5-7: error: rule useless in parser due to conflicts [-Werror=other]
    6 |   | 'b'
      |     ^~~
input.y:8.5-7: error: rule useless in parser due to conflicts [-Werror=other]
    8 |   | 'c'
      |     ^~~
input.y:13.10-18: error: rule useless in parser due to conflicts [-Werror=other]
   13 | empty_a: %prec 'a' ;
      |          ^~~~~~~~~
input.y:14.10-18: error: rule useless in parser due to conflicts [-Werror=other]
   14 | empty_b: %prec 'b' ;
      |          ^~~~~~~~~
input.y:15.11-19: error: rule useless in parser due to conflicts [-Werror=other]
   15 | empty_c1: %prec 'c' ;
      |           ^~~~~~~~~
input.y:16.11-19: error: rule useless in parser due to conflicts [-Werror=other]
   16 | empty_c2: %prec 'c' ;
      |           ^~~~~~~~~
input.y:17.11-19: error: rule useless in parser due to conflicts [-Werror=other]
   17 | empty_c3: %prec 'c' ;
      |           ^~~~~~~~~
REDUCE ITEM PATH:
  0 $accept: . start $end
  7 start: . empty_c2 'c'
  12 empty_c2: %empty .
CONFLICT 1 (size 1 depth 0 rc 2)
  12 empty_c2: %empty .
  12 empty_c2: %empty .  .
CONFLICT 2 (size 1 depth 0 rc 2)
  13 empty_c3: %empty .
  13 empty_c3: %empty .  .
CONFLICT 1 (size 2 depth -1 rc 4)
  7 start: . empty_c2 'c'
  7 start: empty_c2 . 'c'
  empty_c2
    `-> 12: %empty .
CONFLICT 2 (size 1 depth 0 rc 3)
  13 empty_c3: %empty .
  13 empty_c3: %empty .  .
CONFLICT 1 (size 1 depth 0 rc 3)
  12 empty_c2: %empty .
  12 empty_c2: %empty .  .
CONFLICT 2 (size 2 depth -1 rc 2)
  8 start: . empty_c3 'c'
  8 start: empty_c3 . 'c'
  empty_c3
    `-> 13: %empty .
CONFLICT 1 (size 2 depth -1 rc 3)
  7 start: . empty_c2 'c'
  7 start: empty_c2 . 'c'
  empty_c2
    `-> 12: %empty .
CONFLICT 2 (size 2 depth -1 rc 2)
  8 start: . empty_c3 'c'
  8 start: empty_c3 . 'c'
  empty_c3
    `-> 13: %empty .
CONFLICT 1 (size 3 depth -1 rc 2)
  7 start: . empty_c2 'c'
  7 start: empty_c2 'c' .
  empty_c2
    `-> 12: %empty .
CONFLICT 2 (size 3 depth -1 rc 2)
  8 start: . empty_c3 'c'
  8 start: empty_c3 'c' .
  empty_c3
    `-> 13: %empty .
CONFLICT 1 (size 2 depth -1 rc 4)
  0 $accept: . start $end
  0 $accept: start . $end
  start
    `-> 7: empty_c2 'c'
      `-> 12: %empty .
CONFLICT 2 (size 3 depth -1 rc 3)
  8 start: . empty_c3 'c'
  8 start: empty_c3 'c' .
  empty_c3
    `-> 13: %empty .
CONFLICT 1 (size 3 depth -1 rc 3)
  7 start: . empty_c2 'c'
  7 start: empty_c2 'c' .
  empty_c2
    `-> 12: %empty .
CONFLICT 2 (size 2 depth -1 rc 2)
  0 $accept: . start $end
  0 $accept: start . $end
  start
    `-> 8: empty_c3 'c'
      `-> 13: %empty .
CONFLICT 1 (size 2 depth -1 rc 3)
  0 $accept: . start $end
  0 $accept: start . $end
  start
    `-> 7: empty_c2 'c'
      `-> 12: %empty .
CONFLICT 2 (size 2 depth -1 rc 2)
  0 $accept: . start $end
  0 $accept: start . $end
  start
    `-> 8: empty_c3 'c'
      `-> 13: %empty .
./conflicts.at:1959: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:1959: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --trace=cex -fcaret --report=all -o input.c input.y --warnings=error
./conflicts.at:1651: cat input.output
./conflicts.at:1836: cat input.y >> input-keep.y
./conflicts.at:1838: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret input-keep.y
./conflicts.at:1959: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --trace=cex -fcaret --report=all -o input.c input.y -Wnone,none -Werror --trace=none
./conflicts.at:1838: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input-keep.y -Werror
./conflicts.at:1959: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --trace=cex -fcaret --report=all -o input.c input.y --warnings=none -Werror --trace=none
stderr:
input-keep.y: error: 2 shift/reduce conflicts [-Werror=conflicts-sr]
input-keep.y: error: 2 reduce/reduce conflicts [-Werror=conflicts-rr]
input-keep.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
input-keep.y:22.4: error: rule useless in parser due to conflicts [-Werror=other]
input-keep.y:26.16: error: rule useless in parser due to conflicts [-Werror=other]
input-keep.y:32.5-7: error: rule useless in parser due to conflicts [-Werror=other]
input-keep.y:33.4: error: rule useless in parser due to conflicts [-Werror=other]
./conflicts.at:1838: sed 's,.*/$,,' stderr 1>&2
./conflicts.at:1838: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input-keep.y --warnings=error
./conflicts.at:2239: cat input.output | sed -n '/^State 0$/,/^State 1$/p'
256. conflicts.at:1935: ok
./conflicts.at:1838: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input-keep.y -Wnone,none -Werror --trace=none
257. conflicts.at:2299: testing %expect-rr non GLR ...
./conflicts.at:2307: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret 1.y
./conflicts.at:1838: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input-keep.y --warnings=none -Werror --trace=none
./conflicts.at:2307: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret 1.y -Werror
stderr:
1.y: error: %expect-rr applies only to GLR parsers [-Werror=other]
./conflicts.at:2307: sed 's,.*/$,,' stderr 1>&2
254. conflicts.at:1592: ok
./conflicts.at:2307: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret 1.y --warnings=error
258. conflicts.at:2331: testing -W versus %expect and %expect-rr ...
./conflicts.at:2354: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret sr-rr.y ./conflicts.at:2307: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret 1.y -Wnone,none -Werror --trace=none ./conflicts.at:2354: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret sr-rr.y -Werror ./conflicts.at:2307: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret 1.y --warnings=none -Werror --trace=none stderr: sr-rr.y: error: 1 shift/reduce conflict [-Werror=conflicts-sr] sr-rr.y: error: 1 reduce/reduce conflict [-Werror=conflicts-rr] sr-rr.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples ./conflicts.at:2317: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret 2.y ./conflicts.at:2354: sed 's,.*/$,,' stderr 1>&2 ./conflicts.at:2354: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret sr-rr.y --warnings=error ./conflicts.at:2317: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret 2.y -Werror ./conflicts.at:2354: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no 
-fno-caret sr-rr.y -Wnone,none -Werror --trace=none stderr: 2.y: error: %expect-rr applies only to GLR parsers [-Werror=other] 2.y: error: 1 reduce/reduce conflict [-Werror=conflicts-rr] 2.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples 2.y:3.12-14: error: rule useless in parser due to conflicts [-Werror=other] ./conflicts.at:2317: sed 's,.*/$,,' stderr 1>&2 ./conflicts.at:2317: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret 2.y --warnings=error ./conflicts.at:2354: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret sr-rr.y --warnings=none -Werror --trace=none ./conflicts.at:2317: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret 2.y -Wnone,none -Werror --trace=none ./conflicts.at:2359: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-conflicts-sr sr-rr.y ./conflicts.at:2317: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret 2.y --warnings=none -Werror --trace=none ./conflicts.at:2359: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wno-conflicts-sr sr-rr.y -Werror 257. 
conflicts.at:2299: ok stderr: sr-rr.y: error: 1 reduce/reduce conflict [-Werror=conflicts-rr] sr-rr.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples ./conflicts.at:2359: sed 's,.*/$,,' stderr 1>&2 ./conflicts.at:2359: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wno-conflicts-sr sr-rr.y --warnings=error 259. counterexample.at:43: testing Unifying S/R ... ./counterexample.at:55: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y ./conflicts.at:2359: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wno-conflicts-sr sr-rr.y -Wnone,none -Werror --trace=none stderr: input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr] input.y: warning: shift/reduce conflict on token B [-Wcounterexamples] Example: A . B C Shift derivation s `-> 2: y c `-> 8: A . B `-> 4: C Reduce derivation s `-> 1: a x `-> 3: A . 
`-> 6: B C input.y:4.4: warning: rule useless in parser due to conflicts [-Wother] ./counterexample.at:55: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr ./counterexample.at:55: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y ./conflicts.at:2359: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wno-conflicts-sr sr-rr.y --warnings=none -Werror --trace=none stderr: input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr] input.y: warning: shift/reduce conflict on token B [-Wcounterexamples] Example A . B C Shift derivation s -> [ y -> [ A . B ] c -> [ C ] ] Reduce derivation s -> [ a -> [ A . ] x -> [ B C ] ] input.y:4.4: warning: rule useless in parser due to conflicts [-Wother] ./counterexample.at:55: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr 259. counterexample.at:43: ok ./conflicts.at:2363: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-conflicts-rr sr-rr.y 260. counterexample.at:83: testing Deep Unifying S/R ... ./counterexample.at:95: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y ./conflicts.at:2363: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wno-conflicts-rr sr-rr.y -Werror stderr: input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr] input.y: warning: shift/reduce conflict on token B [-Wcounterexamples] Example: A . B C Shift derivation s `-> 1: ac `-> 3: A ac C `-> 4: b `-> 5: . 
B Reduce derivation s `-> 2: a bc `-> 7: A . `-> 10: B C input.y: warning: shift/reduce conflict on token B [-Wcounterexamples] Example: A A . B B C C Shift derivation s `-> 1: ac `-> 3: A ac C `-> 3: A ac C `-> 4: b `-> 6: . b `-> 5: B B Reduce derivation s `-> 2: a bc `-> 8: A a `-> 9: B bc C `-> 7: A . `-> 10: B C input.y:6.4: warning: rule useless in parser due to conflicts [-Wother] ./counterexample.at:95: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr ./counterexample.at:95: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y stderr: sr-rr.y: error: 1 shift/reduce conflict [-Werror=conflicts-sr] ./conflicts.at:2363: sed 's,.*/$,,' stderr 1>&2 stderr: ./conflicts.at:2363: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wno-conflicts-rr sr-rr.y --warnings=error input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr] input.y: warning: shift/reduce conflict on token B [-Wcounterexamples] Example A . B C Shift derivation s -> [ ac -> [ A ac -> [ b -> [ . B ] ] C ] ] Reduce derivation s -> [ a -> [ A . ] bc -> [ B C ] ] input.y: warning: shift/reduce conflict on token B [-Wcounterexamples] Example A A . B B C C Shift derivation s -> [ ac -> [ A ac -> [ A ac -> [ b -> [ . b -> [ B B ] ] ] C ] C ] ] Reduce derivation s -> [ a -> [ A a -> [ A . ] ] bc -> [ B bc -> [ B C ] C ] ] input.y:6.4: warning: rule useless in parser due to conflicts [-Wother] ./counterexample.at:95: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr 260. counterexample.at:83: ok 261. counterexample.at:144: testing S/R Conflict with Nullable Symbols ... 
./counterexample.at:157: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y
./conflicts.at:2363: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wno-conflicts-rr sr-rr.y -Wnone,none -Werror --trace=none
stderr:
input.y: warning: 2 shift/reduce conflicts [-Wconflicts-sr]
input.y: warning: shift/reduce conflict on token B [-Wcounterexamples]
  Example: A . B
  Shift derivation s `-> 2: A xby `-> 9: . B
  Reduce derivation s `-> 1: ax by `-> 3: A x `-> 6: B y `-> 4: %empty . `-> 6: %empty
input.y: warning: shift/reduce conflict on token B [-Wcounterexamples]
  First example: A X . B Y $end
  Shift derivation $accept `-> 0: s $end `-> 2: A xby `-> 10: X xby Y `-> 9: . B
  Second example: A X . B y $end
  Reduce derivation $accept `-> 0: s $end `-> 1: ax by `-> 3: A x `-> 6: B y `-> 5: X x `-> 4: %empty .
input.y:5.4-9: warning: rule useless in parser due to conflicts [-Wother]
./counterexample.at:157: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr
./counterexample.at:157: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y
./conflicts.at:2363: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wno-conflicts-rr sr-rr.y --warnings=none -Werror --trace=none
stderr:
input.y: warning: 2 shift/reduce conflicts [-Wconflicts-sr]
input.y: warning: shift/reduce conflict on token B [-Wcounterexamples]
  Example A . B
  Shift derivation s -> [ A xby -> [ . B ] ]
  Reduce derivation s -> [ ax -> [ A x -> [ . ] ] by -> [ B y -> [ ] ] ]
input.y: warning: shift/reduce conflict on token B [-Wcounterexamples]
  First example A X . B Y $end
  Shift derivation $accept -> [ s -> [ A xby -> [ X xby -> [ . B ] Y ] ] $end ]
  Second example A X . B y $end
  Reduce derivation $accept -> [ s -> [ ax -> [ A x -> [ X x -> [ . ] ] ] by -> [ B y ] ] $end ]
input.y:5.4-9: warning: rule useless in parser due to conflicts [-Wother]
./counterexample.at:157: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr
261. counterexample.at:144: ok
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
262. counterexample.at:207: testing Non-unifying Ambiguous S/R ...
./counterexample.at:220: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
stderr:
input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr]
input.y: warning: shift/reduce conflict on token C [-Wcounterexamples]
  First example: B . C $end
  Shift derivation $accept `-> 0: g $end `-> 2: x `-> 6: bc `-> 9: B . C
  Second example: B . C D $end
  Reduce derivation $accept `-> 0: g $end `-> 2: x `-> 5: b cd `-> 7: B . `-> 8: C D
input.y:6.4: warning: rule useless in parser due to conflicts [-Wother]
./counterexample.at:220: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
./counterexample.at:220: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
./conflicts.at:2417: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wnone $file
stderr:
input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr]
input.y: warning: shift/reduce conflict on token C [-Wcounterexamples]
  First example B . C $end
  Shift derivation $accept -> [ g -> [ x -> [ bc -> [ B . C ] ] ] $end ]
  Second example B . C D $end
  Reduce derivation $accept -> [ g -> [ x -> [ b -> [ B . ] cd -> [ C D ] ] ] $end ]
input.y:6.4: warning: rule useless in parser due to conflicts [-Wother]
./counterexample.at:220: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr
262. counterexample.at:207: ok
263. counterexample.at:254: testing Non-unifying Unambiguous S/R ...
./counterexample.at:265: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y
./conflicts.at:2418: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Werror $file
stderr:
input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr]
input.y: warning: shift/reduce conflict on token A [-Wcounterexamples]
  First example: A . A B $end
  Shift derivation $accept `-> 0: s $end `-> 1: t `-> 4: y `-> 6: A . A B
  Second example: A . A $end
  Reduce derivation $accept `-> 0: s $end `-> 2: s t `-> 1: t `-> 3: x `-> 3: x `-> 5: A `-> 5: A .
./counterexample.at:265: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr
./counterexample.at:265: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
stderr:
input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr]
input.y: warning: shift/reduce conflict on token A [-Wcounterexamples]
  First example A . A B $end
  Shift derivation $accept -> [ s -> [ t -> [ y -> [ A . A B ] ] ] $end ]
  Second example A . A $end
  Reduce derivation $accept -> [ s -> [ s -> [ t -> [ x -> [ A . ] ] ] t -> [ x -> [ A ] ] ] $end ]
./counterexample.at:265: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
263. counterexample.at:254: ok
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file
264. counterexample.at:298: testing S/R after first token ...
./counterexample.at:314: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export 
VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file stderr: input.y: warning: 2 shift/reduce conflicts [-Wconflicts-sr] input.y: warning: shift/reduce conflict on token A [-Wcounterexamples] Example: b . A X X Y Shift derivation a `-> 2: s `-> 7: b . xx y `-> 9: A X X `-> 11: Y Reduce derivation a `-> 1: r t `-> 3: b . `-> 6: A x xy `-> 8: X `-> 10: X Y input.y: warning: shift/reduce conflict on token X [-Wcounterexamples] First example: A X . X Shift derivation a `-> 1: t `-> 5: A xx `-> 9: X . X Second example: X . X xy Reduce derivation a `-> 1: x t `-> 8: X . `-> 6: X xy input.y:4.4: warning: rule useless in parser due to conflicts [-Wother] input.y:8.4: warning: rule useless in parser due to conflicts [-Wother] ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./counterexample.at:314: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr ./counterexample.at:314: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2417: 
COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wnone $file stderr: input.y: warning: 2 shift/reduce conflicts [-Wconflicts-sr] input.y: warning: shift/reduce conflict on token A [-Wcounterexamples] Example b . A X X Y Shift derivation a -> [ s -> [ b . xx -> [ A X X ] y -> [ Y ] ] ] Reduce derivation a -> [ r -> [ b . ] t -> [ A x -> [ X ] xy -> [ X Y ] ] ] input.y: warning: shift/reduce conflict on token X [-Wcounterexamples] First example A X . X Shift derivation a -> [ t -> [ A xx -> [ X . X ] ] ] Second example X . X xy Reduce derivation a -> [ x -> [ X . ] t -> [ X xy ] ] input.y:4.4: warning: rule useless in parser due to conflicts [-Wother] input.y:8.4: warning: rule useless in parser due to conflicts [-Wother] ./counterexample.at:314: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr 264. counterexample.at:298: ok ./conflicts.at:2418: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Werror $file 265. counterexample.at:363: testing Unifying R/R counterexample ... ./counterexample.at:372: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y ./conflicts.at:2417: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wnone $file stderr: In file included from /usr/include/c++/12/vector:70, from input.cc:86: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2102:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:1941:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2367:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2102:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1927:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at input.cc:2770:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2102:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1927:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2753:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:2741:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./conflicts.at:1096: $PREPARSER ./input stderr: syntax error ./conflicts.at:1096: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 237. conflicts.at:1096: ok stderr: input.y: warning: 1 reduce/reduce conflict [-Wconflicts-rr] input.y: warning: reduce/reduce conflict on token $end [-Wcounterexamples] Example: A b . First reduce derivation a `-> 1: A b . Second reduce derivation a `-> 1: A b `-> 3: b . 
input.y:4.9: warning: rule useless in parser due to conflicts [-Wother]
./counterexample.at:372: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr
./counterexample.at:372: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y
266. counterexample.at:399: testing Non-unifying R/R LR(1) conflict ...
./counterexample.at:409: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y
./conflicts.at:2418: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Werror $file
stderr:
input.y: warning: 1 reduce/reduce conflict [-Wconflicts-rr]
input.y: warning: reduce/reduce conflict on token $end [-Wcounterexamples]
  Example A b .
  First reduce derivation a -> [ A b . ]
  Second reduce derivation a -> [ A b -> [ b . ] ]
input.y:4.9: warning: rule useless in parser due to conflicts [-Wother]
./counterexample.at:372: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr
265. counterexample.at:363: ok
stderr:
input.y: warning: 2 reduce/reduce conflicts [-Wconflicts-rr]
input.y: warning: reduce/reduce conflict on tokens A, C [-Wcounterexamples]
  First example: D . A $end
  First reduce derivation
    $accept
    `-> 0: s $end
           `-> 1: a A
                  `-> 5: D .
  Second example: B D . A $end
  Second reduce derivation
    $accept
    `-> 0: s $end
           `-> 4: B b A
                  `-> 6: D .
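The reduce/reduce diagnostics above come from bison's -Wcounterexamples option, which searches for a concrete input string admitting two derivations. As an illustration only (this is not the grammar used by the test suite), a minimal grammar that provokes the same kind of R/R conflict on $end:

```
/* Illustrative grammar, not the test-suite input.  After reading 'x'
 * with lookahead $end, the parser can reduce by either a: 'x' or
 * b: 'x', so bison reports a reduce/reduce conflict on token $end and
 * -Wcounterexamples prints one example with two reduce derivations. */
%%
s: a | b;
a: 'x';
b: 'x';
```

Running something like `bison -Wcounterexamples rr.y` on such a file produces output in the style seen above; the log suggests the tests rerun with YYFLAT=1 to get the one-line `-> [ ... ]` derivation format instead of the indented tree.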
input.y:5.4: warning: rule useless in parser due to conflicts [-Wother] ./counterexample.at:409: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr ./counterexample.at:409: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file 267. counterexample.at:441: testing Non-unifying R/R LR(2) conflict ... 
./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./counterexample.at:451: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file stderr: input.y: warning: 2 reduce/reduce conflicts [-Wconflicts-rr] input.y: warning: reduce/reduce conflict on tokens A, C [-Wcounterexamples] First example D . A $end First reduce derivation $accept -> [ s -> [ a -> [ D . ] A ] $end ] Second example B D . A $end Second reduce derivation $accept -> [ s -> [ B b -> [ D . ] A ] $end ] input.y:5.4: warning: rule useless in parser due to conflicts [-Wother] ./counterexample.at:409: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file 266. 
counterexample.at:399: ok ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file 268. counterexample.at:488: testing Cex Search Prepend ... ./counterexample.at:499: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2417: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; 
bison --color=no -fno-caret -Wnone $file
stderr:
input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr]
input.y: warning: shift/reduce conflict on token B [-Wcounterexamples]
  Example: N A . B C
  Shift derivation
    s
    `-> 1: n
           `-> 6: N b
                  `-> 8: A . B C
  Reduce derivation
    s
    `-> 2: n C
           `-> 5: N a B
                  `-> 7: A .
input.y: warning: shift/reduce conflict on token B [-Wcounterexamples]
  Example: N N A . B D C
  Shift derivation
    s
    `-> 1: n
           `-> 4: N n C
                  `-> 6: N b
                         `-> 9: A . B D
  Reduce derivation
    s
    `-> 2: n C
           `-> 3: N n D
                  `-> 5: N a B
                         `-> 7: A .
input.y:5.4: warning: rule useless in parser due to conflicts [-Wother]
./counterexample.at:499: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr
./counterexample.at:499: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y
./conflicts.at:2418: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Werror $file
stderr:
input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr]
input.y: warning: shift/reduce conflict on token B [-Wcounterexamples]
  Example N A . B C
  Shift derivation s -> [ n -> [ N b -> [ A . B C ] ] ]
  Reduce derivation s -> [ n -> [ N a -> [ A . ] B ] C ]
input.y: warning: shift/reduce conflict on token B [-Wcounterexamples]
  Example N N A . B D C
  Shift derivation s -> [ n -> [ N n -> [ N b -> [ A . B D ] ] C ] ]
  Reduce derivation s -> [ n -> [ N n -> [ N a -> [ A . ] B ] D ] C ]
input.y:5.4: warning: rule useless in parser due to conflicts [-Wother]
./counterexample.at:499: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr
268.
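The shift/reduce counterexamples above show the same mechanism for the other conflict kind: at the conflict point, shifting the lookahead and reducing the handle both lead to valid parses. A minimal, purely illustrative grammar (again, not the test-suite input) with a classic shift/reduce conflict:

```
/* Illustrative grammar, not the test-suite input.  With the stack
 * holding  e '+' e  and lookahead '+', the parser can either shift
 * (right association) or reduce (left association): a shift/reduce
 * conflict on '+'.  Declaring  %left '+'  in the prologue would
 * resolve it in favor of reduction. */
%%
e: e '+' e
 | 'n'
 ;
```

Bison resolves such conflicts in favor of shifting by default, which is why the tests above only warn (or fail under -Werror) rather than reject the grammar.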
counterexample.at:488: ok ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file 269. counterexample.at:550: testing R/R cex with prec ... ./counterexample.at:562: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2417: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wnone $file stderr: input.y: warning: 4 reduce/reduce conflicts [-Wconflicts-rr] input.y: warning: reduce/reduce conflict on tokens b, c [-Wcounterexamples] Example: B . 
b c First reduce derivation S `-> 1: B C `-> 6: A b A `-> 7: A c A `-> 3: B . `-> 6: %empty `-> 7: %empty `-> 7: %empty Second reduce derivation S `-> 1: B C `-> 7: A c A `-> 3: B `-> 7: %empty `-> 6: A b A `-> 5: %empty . `-> 6: %empty input.y: warning: reduce/reduce conflict on tokens b, c [-Wcounterexamples] Example: C . c b First reduce derivation S `-> 2: C B `-> 7: A c A `-> 6: A b A `-> 4: C . `-> 7: %empty `-> 6: %empty `-> 6: %empty Second reduce derivation S `-> 2: C B `-> 6: A b A `-> 4: C `-> 6: %empty `-> 7: A c A `-> 5: %empty . `-> 7: %empty ./counterexample.at:562: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr ./counterexample.at:562: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y ./conflicts.at:2418: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file stderr: input.y: warning: 4 reduce/reduce conflicts [-Wconflicts-rr] input.y: warning: reduce/reduce conflict on tokens b, c [-Wcounterexamples] Example B . b c First reduce derivation S -> [ B -> [ A -> [ B . 
] b A -> [ ] ] C -> [ A -> [ ] c A -> [ ] ] ] Second reduce derivation S -> [ B C -> [ A -> [ B -> [ A -> [ . ] b A -> [ ] ] ] c A -> [ ] ] ] input.y: warning: reduce/reduce conflict on tokens b, c [-Wcounterexamples] Example C . c b First reduce derivation S -> [ C -> [ A -> [ C . ] c A -> [ ] ] B -> [ A -> [ ] b A -> [ ] ] ] Second reduce derivation S -> [ C B -> [ A -> [ C -> [ A -> [ . ] c A -> [ ] ] ] b A -> [ ] ] ] ./counterexample.at:562: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file 269. counterexample.at:550: ok ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file 270. counterexample.at:610: testing Null nonterminals ... 
./counterexample.at:621: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export 
VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file ./conflicts.at:2444: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wnone $file ./conflicts.at:2445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Werror $file 258. conflicts.at:2331: ok 271. counterexample.at:797: testing Non-unifying Prefix Share ... ./counterexample.at:810: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y stderr: input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr] input.y: warning: shift/reduce conflict on token J [-Wcounterexamples] time limit exceeded: 6.000000 First example: H i . J K $end Shift derivation $accept `-> 0: a $end `-> 2: H i `-> 4: i . J K Second example: H i . J $end Reduce derivation $accept `-> 0: s $end `-> 1: a J `-> 2: H i . input.y:4.4-6: warning: rule useless in parser due to conflicts [-Wother] ./counterexample.at:451: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr ./counterexample.at:451: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y stderr: input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr] input.y: warning: shift/reduce conflict on token J [-Wcounterexamples] Example: H i J . J J Shift derivation s `-> 2: a J `-> 3: H i J . 
J Reduce derivation s `-> 1: a `-> 3: H i J J `-> 5: i J . input.y:5.13-15: warning: rule useless in parser due to conflicts [-Wother] ./counterexample.at:810: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr ./counterexample.at:810: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y stderr: input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr] input.y: warning: shift/reduce conflict on token J [-Wcounterexamples] Example H i J . J J Shift derivation s -> [ a -> [ H i J . J ] J ] Reduce derivation s -> [ a -> [ H i -> [ i J . ] J J ] ] input.y:5.13-15: warning: rule useless in parser due to conflicts [-Wother] ./counterexample.at:810: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr 271. counterexample.at:797: ok 272. counterexample.at:842: testing Deep Null Unifying ... ./counterexample.at:854: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y stderr: input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr] input.y: warning: shift/reduce conflict on token D [-Wcounterexamples] Example: A a . D Shift derivation s `-> 1: A a d `-> 6: . D Reduce derivation s `-> 2: A a a d `-> 3: b `-> 6: D `-> 4: c `-> 5: %empty . ./counterexample.at:854: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr ./counterexample.at:854: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y stderr: input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr] input.y: warning: shift/reduce conflict on token D [-Wcounterexamples] Example A a . D Shift derivation s -> [ A a d -> [ . D ] ] Reduce derivation s -> [ A a a -> [ b -> [ c -> [ . 
] ] ] d -> [ D ] ] ./counterexample.at:854: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr 272. counterexample.at:842: ok 273. counterexample.at:884: testing Deep Null Non-unifying ... ./counterexample.at:896: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y stderr: input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr] input.y: warning: shift/reduce conflict on token D [-Wcounterexamples] First example: A a . D $end Shift derivation $accept `-> 0: s $end `-> 1: A a d `-> 6: . D Second example: A a . D E $end Reduce derivation $accept `-> 0: s $end `-> 2: A a a d E `-> 3: b `-> 6: D `-> 4: c `-> 5: %empty . ./counterexample.at:896: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr ./counterexample.at:896: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y stderr: input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr] input.y: warning: shift/reduce conflict on token D [-Wcounterexamples] First example A a . D $end Shift derivation $accept -> [ s -> [ A a d -> [ . D ] ] $end ] Second example A a . D E $end Reduce derivation $accept -> [ s -> [ A a a -> [ b -> [ c -> [ . ] ] ] d -> [ D ] E ] $end ] ./counterexample.at:896: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr 273. counterexample.at:884: ok 274. synclines.at:194: testing Prologue syncline ... ./synclines.at:194: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./synclines.at:194: $CC $CFLAGS $CPPFLAGS -c syncline.c stderr: syncline.c: In function 'foo': syncline.c:4:2: error: #error "4" 4 | #error "4" | ^~~~~ ./synclines.at:194: "$PERL" -p -0777 - stderr <<\EOF || exit 77 # Remove left-hand margin. s/^[\d ]{6}\| //gm; # 1. 
Remove useless lines.
# distcc clutter.
s/^distcc\[\d+\] .*\n//gm;
# c vs. c++.
s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm;
# Function context.
s/^[^:]*: In function '[^']+':\n//gm;
# Caret error (with possible '~' to underline).
s/^ *#error.*\n *\^~*\n//gm;
# Number of errors.
s/^1 error generated\.\n//gm;
# 2. Normalize the lines we kept.
# xlc messages. Remove also error identifier (e.g., "1540-0218 (S)").
s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm;
# Remove column.
s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm;
# Map all combinations of "error: " and "#error: " to "#error ".
s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm;
EOF
stdout:
syncline.c:4: #error "4"
./synclines.at:194: test "`cat stdout`" = 'syncline.c:4: #error "4"' || exit 77
./synclines.at:194: $CC $CFLAGS $CPPFLAGS -c input.c
stderr:
input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr]
input.y: warning: shift/reduce conflict on token J [-Wcounterexamples]
time limit exceeded: 6.000000
  First example H i . J K $end
  Shift derivation $accept -> [ a -> [ H i -> [ i . J K ] ] $end ]
  Second example H i . J $end
  Reduce derivation $accept -> [ s -> [ a -> [ H i . ] J ] $end ]
input.y:4.4-6: warning: rule useless in parser due to conflicts [-Wother]
./counterexample.at:451: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr
stderr:
input.y:2:2: error: #error "2"
    2 | #error "2"
      |  ^~~~~
./synclines.at:194: "$PERL" -p -0777 - stderr <<\EOF || exit 77
# Remove left-hand margin.
s/^[\d ]{6}\| //gm;
# 1. Remove useless lines.
# distcc clutter.
s/^distcc\[\d+\] .*\n//gm;
# c vs. c++.
s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm;
# Function context.
s/^[^:]*: In function '[^']+':\n//gm;
# Caret error (with possible '~' to underline).
s/^ *#error.*\n *\^~*\n//gm;
# Number of errors.
s/^1 error generated\.\n//gm;
# 2. Normalize the lines we kept.
# xlc messages. Remove also error identifier (e.g., "1540-0218 (S)").
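The synclines tests above check that errors inside user code copied from the grammar file are reported against input.y rather than the generated input.c. Bison achieves this with C `#line` directives; a hedged sketch of what the relevant part of the generated file looks like (line numbers and file names here are illustrative, taken from the log's `input.y:2: #error "2"` result, not from the actual generated parser):

```
/* Illustrative fragment of a bison-generated input.c: the user's
 * prologue code from input.y is emitted behind a #line directive,
 * so the C compiler attributes the deliberate #error to input.y:2
 * instead of to whatever line of input.c it physically occupies. */
#line 2 "input.y"
#error "2"
```

The perl normalization script in the log then strips columns and caret lines so that compilers with differing diagnostic formats all reduce to the canonical `input.y:2: #error "2"` the test compares against.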
s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm; # Remove column. s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm; # Map all combinations of "error: " and "#error: " to "#error ". s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm; EOF 267. counterexample.at:441: ok stdout: input.y:2: #error "2" ./synclines.at:194: cat stdout 274. synclines.at:194: ok 275. synclines.at:214: testing %union syncline ... ./synclines.at:214: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y 276. synclines.at:237: testing %union name syncline ... ./synclines.at:253: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./synclines.at:254: $CC $CFLAGS $CPPFLAGS -c syncline.c ./synclines.at:214: $CC $CFLAGS $CPPFLAGS -c syncline.c stderr: stderr: syncline.c: In function 'foo': syncline.c:4:2: error: #error "4" 4 | #error "4" | ^~~~~ ./synclines.at:254: "$PERL" -p -0777 - stderr <<\EOF || exit 77 # Remove left-hand margin. s/^[\d ]{6}\| //gm; # 1. Remove useless lines. # distcc clutter. s/^distcc\[\d+\] .*\n//gm; # c vs. c++. s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm; # Function context. s/^[^:]*: In function '[^']+':\n//gm; # Caret error (with possible '~' to underline). s/^ *#error.*\n *\^~*\n//gm; # Number of errors. s/^1 error generated\.\n//gm; # 2. Normalize the lines we kept. # xlc messages. Remove also error identifier (e.g., "1540-0218 (S)"). s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm; # Remove column. s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm; # Map all combinations of "error: " and "#error: " to "#error ". s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm; EOF syncline.c: In function 'foo': syncline.c:4:2: error: #error "4" 4 | #error "4" | ^~~~~ ./synclines.at:214: "$PERL" -p -0777 - stderr <<\EOF || exit 77 # Remove left-hand margin. s/^[\d ]{6}\| //gm; # 1. Remove useless lines. # distcc clutter. 
s/^distcc\[\d+\] .*\n//gm; # c vs. c++. s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm; # Function context. s/^[^:]*: In function '[^']+':\n//gm; # Caret error (with possible '~' to underline). s/^ *#error.*\n *\^~*\n//gm; # Number of errors. s/^1 error generated\.\n//gm; # 2. Normalize the lines we kept. # xlc messages. Remove also error identifier (e.g., "1540-0218 (S)"). s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm; # Remove column. s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm; # Map all combinations of "error: " and "#error: " to "#error ". s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm; EOF stdout: syncline.c:4: #error "4" ./synclines.at:254: test "`cat stdout`" = 'syncline.c:4: #error "4"' || exit 77 stdout: syncline.c:4: #error "4" ./synclines.at:214: test "`cat stdout`" = 'syncline.c:4: #error "4"' || exit 77 ./synclines.at:254: $CC $CFLAGS $CPPFLAGS -c input.c ./synclines.at:214: $CC $CFLAGS $CPPFLAGS -c input.c stderr: input.y:2:2: error: #error "2" 2 | #error "2" | ^~~~~ ./synclines.at:214: "$PERL" -p -0777 - stderr <<\EOF || exit 77 # Remove left-hand margin. s/^[\d ]{6}\| //gm; # 1. Remove useless lines. # distcc clutter. s/^distcc\[\d+\] .*\n//gm; # c vs. c++. s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm; # Function context. s/^[^:]*: In function '[^']+':\n//gm; # Caret error (with possible '~' to underline). s/^ *#error.*\n *\^~*\n//gm; # Number of errors. s/^1 error generated\.\n//gm; # 2. Normalize the lines we kept. # xlc messages. Remove also error identifier (e.g., "1540-0218 (S)"). s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm; # Remove column. s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm; # Map all combinations of "error: " and "#error: " to "#error ". 
s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm; EOF stderr: stdout: input.y:1:7: error: expected '{' before 'break' 1 | %union break | ^~~~~ input.y:1:15: error: expected '{' before 'break' 1 | %union break | ^ input.y:7:8: error: unknown type name 'YYSTYPE' 7 | int yylex (void); | ^~~~~~~ input.y:8:1: error: expected identifier or '(' before '%' token 8 | %} | ^ input.c:273:9: error: unknown type name 'yytype_int8' 273 | typedef yytype_int8 yy_state_t; | ^~~~~~~~~~~ input.c:429:3: error: unknown type name 'YYSTYPE' 429 | YYSTYPE yyvs_alloc; | ^~~~~~~ input.c:508:14: error: unknown type name 'yytype_int8' 508 | static const yytype_int8 yytranslate[] = | ^~~~~~~~~~~ input.c:581:14: error: unknown type name 'yytype_int8' 581 | static const yytype_int8 yypact[] = | ^~~~~~~~~~~ input.c:589:14: error: unknown type name 'yytype_int8' 589 | static const yytype_int8 yydefact[] = | ^~~~~~~~~~~ input.c:595:14: error: unknown type name 'yytype_int8' 595 | static const yytype_int8 yypgoto[] = | ^~~~~~~~~~~ input.c:601:14: error: unknown type name 'yytype_int8' 601 | static const yytype_int8 yydefgoto[] = | ^~~~~~~~~~~ input.c:609:14: error: unknown type name 'yytype_int8' 609 | static const yytype_int8 yytable[] = | ^~~~~~~~~~~ input.c:614:14: error: unknown type name 'yytype_int8' 614 | static const yytype_int8 yycheck[] = | ^~~~~~~~~~~ input.c:621:14: error: unknown type name 'yytype_int8' 621 | static const yytype_int8 yystos[] = | ^~~~~~~~~~~ input.c:627:14: error: unknown type name 'yytype_int8' 627 | static const yytype_int8 yyr1[] = | ^~~~~~~~~~~ input.c:633:14: error: unknown type name 'yytype_int8' 633 | static const yytype_int8 yyr2[] = | ^~~~~~~~~~~ input.c:828:37: error: unknown type name 'YYSTYPE' 828 | yysymbol_kind_t yykind, YYSTYPE *yyvaluep) | ^~~~~~~ input.c:322:5: error: expected end of line before 'push' 322 | _Pragma ("GCC diagnostic push") \ | ^~~~~~~ input.c:835:3: note: in expansion of macro 'YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN' 835 | 
YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.c:323:5: error: expected end of line before 'ignored' 323 | _Pragma ("GCC diagnostic ignored \"-Wuninitialized\"") \ | ^~~~~~~ input.c:835:3: note: in expansion of macro 'YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN' 835 | YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.c:324:5: error: expected end of line before 'ignored' 324 | _Pragma ("GCC diagnostic ignored \"-Wmaybe-uninitialized\"") | ^~~~~~~ input.c:835:3: note: in expansion of macro 'YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN' 835 | YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.c:327:5: error: expected end of line before 'pop' 327 | _Pragma ("GCC diagnostic pop") | ^~~~~~~ input.c:837:3: note: in expansion of macro 'YY_IGNORE_MAYBE_UNINITIALIZED_END' 837 | YY_IGNORE_MAYBE_UNINITIALIZED_END | ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ input.c:845:1: error: unknown type name 'YYSTYPE' 845 | YYSTYPE yylval; | ^~~~~~~ input.c: In function 'yyparse': input.c:875:5: error: unknown type name 'YYSTYPE' 875 | YYSTYPE yyvsa[YYINITDEPTH]; | ^~~~~~~ input.c:876:5: error: unknown type name 'YYSTYPE' 876 | YYSTYPE *yyvs = yyvsa; | ^~~~~~~ input.c:877:5: error: unknown type name 'YYSTYPE' 877 | YYSTYPE *yyvsp = yyvs; | ^~~~~~~ input.c:886:3: error: unknown type name 'YYSTYPE' 886 | YYSTYPE yyval; | ^~~~~~~ input.c:438:48: error: 'YYSTYPE' undeclared (first use in this function) 438 | ((N) * (YYSIZEOF (yy_state_t) + YYSIZEOF (YYSTYPE)) \ | ^~~~~~~ input.c:75:40: note: in definition of macro 'YY_CAST' 75 | # define YY_CAST(Type, Val) ((Type) (Val)) | ^~~ input.c:962:35: note: in expansion of macro 'YY_CAST' 962 | YYSTACK_ALLOC (YY_CAST (YYSIZE_T, YYSTACK_BYTES (yystacksize)))); | ^~~~~~~ input.c:269:21: note: in expansion of macro 'YY_CAST' 269 | #define YYSIZEOF(X) YY_CAST (YYPTRDIFF_T, sizeof (X)) | ^~~~~~~ input.c:438:38: note: in expansion of macro 'YYSIZEOF' 438 | ((N) * (YYSIZEOF (yy_state_t) + 
YYSIZEOF (YYSTYPE)) \ | ^~~~~~~~ input.c:962:54: note: in expansion of macro 'YYSTACK_BYTES' 962 | YYSTACK_ALLOC (YY_CAST (YYSIZE_T, YYSTACK_BYTES (yystacksize)))); | ^~~~~~~~~~~~~ input.c:438:48: note: each undeclared identifier is reported only once for each function it appears in 438 | ((N) * (YYSIZEOF (yy_state_t) + YYSIZEOF (YYSTYPE)) \ | ^~~~~~~ input.c:75:40: note: in definition of macro 'YY_CAST' 75 | # define YY_CAST(Type, Val) ((Type) (Val)) | ^~~ input.c:962:35: note: in expansion of macro 'YY_CAST' 962 | YYSTACK_ALLOC (YY_CAST (YYSIZE_T, YYSTACK_BYTES (yystacksize)))); | ^~~~~~~ input.c:269:21: note: in expansion of macro 'YY_CAST' 269 | #define YYSIZEOF(X) YY_CAST (YYPTRDIFF_T, sizeof (X)) | ^~~~~~~ input.c:438:38: note: in expansion of macro 'YYSIZEOF' 438 | ((N) * (YYSIZEOF (yy_state_t) + YYSIZEOF (YYSTYPE)) \ | ^~~~~~~~ input.c:962:54: note: in expansion of macro 'YYSTACK_BYTES' 962 | YYSTACK_ALLOC (YY_CAST (YYSIZE_T, YYSTACK_BYTES (yystacksize)))); | ^~~~~~~~~~~~~ input.c:1162:11: warning: implicit declaration of function 'yydestruct' [-Wimplicit-function-declaration] 1162 | yydestruct ("Error: discarding", | ^~~~~~~~~~ ./synclines.at:254: "$PERL" -p -0777 - stderr <<\EOF || exit 77 # Remove left-hand margin. s/^[\d ]{6}\| //gm; # 1. Remove useless lines. # distcc clutter. s/^distcc\[\d+\] .*\n//gm; # c vs. c++. s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm; # Function context. s/^[^:]*: In function '[^']+':\n//gm; # Caret error (with possible '~' to underline). s/^ *#error.*\n *\^~*\n//gm; # Number of errors. s/^1 error generated\.\n//gm; # 2. Normalize the lines we kept. # xlc messages. Remove also error identifier (e.g., "1540-0218 (S)"). s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm; # Remove column. s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm; # Map all combinations of "error: " and "#error: " to "#error ". s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm; EOF input.y:2: #error "2" ./synclines.at:214: cat stdout 275. 
synclines.at:214: ok
stdout:
input.y:1: #error expected '{' before 'break'
 %union break
 ^~~~~
input.y:1: #error expected '{' before 'break'
 %union break
        ^
input.y:7: #error unknown type name 'YYSTYPE'
 int yylex (void);
 ^~~~~~~
input.y:8: #error expected identifier or '(' before '%' token
 %}
 ^
input.c:273: #error unknown type name 'yytype_int8'
 typedef yytype_int8 yy_state_t;
         ^~~~~~~~~~~
input.c:429: #error unknown type name 'YYSTYPE'
 YYSTYPE yyvs_alloc;
 ^~~~~~~
input.c:508: #error unknown type name 'yytype_int8'
 static const yytype_int8 yytranslate[] =
              ^~~~~~~~~~~
input.c:581: #error unknown type name 'yytype_int8'
 static const yytype_int8 yypact[] =
              ^~~~~~~~~~~
input.c:589: #error unknown type name 'yytype_int8'
 static const yytype_int8 yydefact[] =
              ^~~~~~~~~~~
input.c:595: #error unknown type name 'yytype_int8'
 static const yytype_int8 yypgoto[] =
              ^~~~~~~~~~~
input.c:601: #error unknown type name 'yytype_int8'
 static const yytype_int8 yydefgoto[] =
              ^~~~~~~~~~~
input.c:609: #error unknown type name 'yytype_int8'
 static const yytype_int8 yytable[] =
              ^~~~~~~~~~~
input.c:614: #error unknown type name 'yytype_int8'
 static const yytype_int8 yycheck[] =
              ^~~~~~~~~~~
input.c:621: #error unknown type name 'yytype_int8'
 static const yytype_int8 yystos[] =
              ^~~~~~~~~~~
input.c:627: #error unknown type name 'yytype_int8'
 static const yytype_int8 yyr1[] =
              ^~~~~~~~~~~
input.c:633: #error unknown type name 'yytype_int8'
 static const yytype_int8 yyr2[] =
              ^~~~~~~~~~~
input.c:828: #error unknown type name 'YYSTYPE'
 yysymbol_kind_t yykind, YYSTYPE *yyvaluep)
                         ^~~~~~~
input.c:322: #error expected end of line before 'push'
 _Pragma ("GCC diagnostic push") \
 ^~~~~~~
input.c:835: #error note: in expansion of macro 'YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN'
 YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN
 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.c:323: #error expected end of line before 'ignored'
 _Pragma ("GCC diagnostic ignored \"-Wuninitialized\"") \
 ^~~~~~~
input.c:835: #error note: in expansion of macro 'YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN'
 YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN
 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.c:324: #error expected end of line before 'ignored'
 _Pragma ("GCC diagnostic ignored \"-Wmaybe-uninitialized\"")
 ^~~~~~~
input.c:835: #error note: in expansion of macro 'YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN'
 YY_IGNORE_MAYBE_UNINITIALIZED_BEGIN
 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.c:327: #error expected end of line before 'pop'
 _Pragma ("GCC diagnostic pop")
 ^~~~~~~
input.c:837: #error note: in expansion of macro 'YY_IGNORE_MAYBE_UNINITIALIZED_END'
 YY_IGNORE_MAYBE_UNINITIALIZED_END
 ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
input.c:845: #error unknown type name 'YYSTYPE'
input.c:875: #error unknown type name 'YYSTYPE'
 YYSTYPE yyvsa[YYINITDEPTH];
 ^~~~~~~
input.c:876: #error unknown type name 'YYSTYPE'
 YYSTYPE *yyvs = yyvsa;
 ^~~~~~~
input.c:877: #error unknown type name 'YYSTYPE'
 YYSTYPE *yyvsp = yyvs;
 ^~~~~~~
input.c:886: #error unknown type name 'YYSTYPE'
 YYSTYPE yyval;
 ^~~~~~~
input.c:438: #error 'YYSTYPE' undeclared (first use in this function)
 ((N) * (YYSIZEOF (yy_state_t) + YYSIZEOF (YYSTYPE)) \
 ^~~~~~~
input.c:75: #error note: in definition of macro 'YY_CAST'
 # define YY_CAST(Type, Val) ((Type) (Val))
 ^~~
input.c:962: #error note: in expansion of macro 'YY_CAST'
 YYSTACK_ALLOC (YY_CAST (YYSIZE_T, YYSTACK_BYTES (yystacksize))));
 ^~~~~~~
input.c:269: #error note: in expansion of macro 'YY_CAST'
 #define YYSIZEOF(X) YY_CAST (YYPTRDIFF_T, sizeof (X))
 ^~~~~~~
input.c:438: #error note: in expansion of macro 'YYSIZEOF'
 ((N) * (YYSIZEOF (yy_state_t) + YYSIZEOF (YYSTYPE)) \
 ^~~~~~~~
input.c:962: #error note: in expansion of macro 'YYSTACK_BYTES'
 YYSTACK_ALLOC (YY_CAST (YYSIZE_T, YYSTACK_BYTES (yystacksize))));
 ^~~~~~~~~~~~~
input.c:438: #error note: each undeclared identifier is reported only once for each function it appears in
 ((N) * (YYSIZEOF (yy_state_t) + YYSIZEOF (YYSTYPE)) \
 ^~~~~~~
input.c:75: #error note: in definition of macro 'YY_CAST'
 # define YY_CAST(Type, Val) ((Type) (Val))
 ^~~
input.c:962: #error note: in expansion of macro 'YY_CAST'
 YYSTACK_ALLOC (YY_CAST (YYSIZE_T, YYSTACK_BYTES (yystacksize))));
 ^~~~~~~
input.c:269: #error note: in expansion of macro 'YY_CAST'
 #define YYSIZEOF(X) YY_CAST (YYPTRDIFF_T, sizeof (X))
 ^~~~~~~
input.c:438: #error note: in expansion of macro 'YYSIZEOF'
 ((N) * (YYSIZEOF (yy_state_t) + YYSIZEOF (YYSTYPE)) \
 ^~~~~~~~
input.c:962: #error note: in expansion of macro 'YYSTACK_BYTES'
 YYSTACK_ALLOC (YY_CAST (YYSIZE_T, YYSTACK_BYTES (yystacksize))));
 ^~~~~~~~~~~~~
input.c:1162: #error warning: implicit declaration of function 'yydestruct' [-Wimplicit-function-declaration]
 yydestruct ("Error: discarding",
 ^~~~~~~~~~
./synclines.at:255: grep '^input.y:1' stdout
stdout:
input.y:1: #error expected '{' before 'break'
input.y:1: #error expected '{' before 'break'
276. synclines.at:237: ok
277. synclines.at:264: testing Postprologue syncline ...
./synclines.at:264: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
278. synclines.at:291: testing Action syncline ...
./synclines.at:291: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./synclines.at:291: $CC $CFLAGS $CPPFLAGS -c syncline.c
./synclines.at:264: $CC $CFLAGS $CPPFLAGS -c syncline.c
stderr:
syncline.c: In function 'foo':
syncline.c:4:2: error: #error "4"
    4 | #error "4"
      | ^~~~~
stderr:
./synclines.at:291: "$PERL" -p -0777 - stderr <<\EOF || exit 77
# Remove left-hand margin.
s/^[\d ]{6}\| //gm;
# 1. Remove useless lines.
# distcc clutter.
s/^distcc\[\d+\] .*\n//gm;
# c vs. c++.
s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm;
# Function context.
s/^[^:]*: In function '[^']+':\n//gm;
# Caret error (with possible '~' to underline).
s/^ *#error.*\n *\^~*\n//gm;
# Number of errors.
s/^1 error generated\.\n//gm;
# 2. Normalize the lines we kept.
# xlc messages.
Remove also error identifier (e.g., "1540-0218 (S)").
s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm;
# Remove column.
s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm;
# Map all combinations of "error: " and "#error: " to "#error ".
s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm;
EOF
syncline.c: In function 'foo':
syncline.c:4:2: error: #error "4"
    4 | #error "4"
      | ^~~~~
./synclines.at:264: "$PERL" -p -0777 - stderr <<\EOF || exit 77
# Remove left-hand margin.
s/^[\d ]{6}\| //gm;
# 1. Remove useless lines.
# distcc clutter.
s/^distcc\[\d+\] .*\n//gm;
# c vs. c++.
s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm;
# Function context.
s/^[^:]*: In function '[^']+':\n//gm;
# Caret error (with possible '~' to underline).
s/^ *#error.*\n *\^~*\n//gm;
# Number of errors.
s/^1 error generated\.\n//gm;
# 2. Normalize the lines we kept.
# xlc messages.  Remove also error identifier (e.g., "1540-0218 (S)").
s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm;
# Remove column.
s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm;
# Map all combinations of "error: " and "#error: " to "#error ".
s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm;
EOF
stdout:
syncline.c:4: #error "4"
./synclines.at:291: test "`cat stdout`" = 'syncline.c:4: #error "4"' || exit 77
stdout:
./synclines.at:291: $CC $CFLAGS $CPPFLAGS -c input.c
syncline.c:4: #error "4"
./synclines.at:264: test "`cat stdout`" = 'syncline.c:4: #error "4"' || exit 77
./synclines.at:264: $CC $CFLAGS $CPPFLAGS -c input.c
stderr:
input.y: In function 'yyparse':
input.y:8:2: error: #error "8"
    8 | #error "8"
      | ^~~~~
./synclines.at:291: "$PERL" -p -0777 - stderr <<\EOF || exit 77
# Remove left-hand margin.
s/^[\d ]{6}\| //gm;
# 1. Remove useless lines.
# distcc clutter.
s/^distcc\[\d+\] .*\n//gm;
# c vs. c++.
s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm;
# Function context.
s/^[^:]*: In function '[^']+':\n//gm;
# Caret error (with possible '~' to underline).
s/^ *#error.*\n *\^~*\n//gm;
# Number of errors.
s/^1 error generated\.\n//gm;
# 2. Normalize the lines we kept.
# xlc messages.  Remove also error identifier (e.g., "1540-0218 (S)").
s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm;
# Remove column.
s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm;
# Map all combinations of "error: " and "#error: " to "#error ".
s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm;
EOF
stdout:
stderr:
input.y:8: #error "8"
input.y:13:2: error: #error "13"
   13 | #error "13"
      | ^~~~~
./synclines.at:291: cat stdout
./synclines.at:264: "$PERL" -p -0777 - stderr <<\EOF || exit 77
# Remove left-hand margin.
s/^[\d ]{6}\| //gm;
# 1. Remove useless lines.
# distcc clutter.
s/^distcc\[\d+\] .*\n//gm;
# c vs. c++.
s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm;
# Function context.
s/^[^:]*: In function '[^']+':\n//gm;
# Caret error (with possible '~' to underline).
s/^ *#error.*\n *\^~*\n//gm;
# Number of errors.
s/^1 error generated\.\n//gm;
# 2. Normalize the lines we kept.
# xlc messages.  Remove also error identifier (e.g., "1540-0218 (S)").
s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm;
# Remove column.
s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm;
# Map all combinations of "error: " and "#error: " to "#error ".
s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm;
EOF
278. synclines.at:291: ok
stdout:
input.y:13: #error "13"
./synclines.at:264: cat stdout
277. synclines.at:264: ok
279. synclines.at:310: testing Epilogue syncline ...
./synclines.at:310: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
280. synclines.at:327: testing %code top syncline ...
./synclines.at:327: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./synclines.at:310: $CC $CFLAGS $CPPFLAGS -c syncline.c
./synclines.at:327: $CC $CFLAGS $CPPFLAGS -c syncline.c
stderr:
syncline.c: In function 'foo':
syncline.c:4:2: error: #error "4"
    4 | #error "4"
      | ^~~~~
./synclines.at:310: "$PERL" -p -0777 - stderr <<\EOF || exit 77
# Remove left-hand margin.
s/^[\d ]{6}\| //gm;
# 1. Remove useless lines.
# distcc clutter.
s/^distcc\[\d+\] .*\n//gm;
# c vs. c++.
s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm;
# Function context.
s/^[^:]*: In function '[^']+':\n//gm;
# Caret error (with possible '~' to underline).
s/^ *#error.*\n *\^~*\n//gm;
# Number of errors.
s/^1 error generated\.\n//gm;
# 2. Normalize the lines we kept.
# xlc messages.  Remove also error identifier (e.g., "1540-0218 (S)").
s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm;
# Remove column.
s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm;
# Map all combinations of "error: " and "#error: " to "#error ".
s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm;
EOF
stderr:
syncline.c: In function 'foo':
syncline.c:4:2: error: #error "4"
    4 | #error "4"
      | ^~~~~
./synclines.at:327: "$PERL" -p -0777 - stderr <<\EOF || exit 77
# Remove left-hand margin.
s/^[\d ]{6}\| //gm;
# 1. Remove useless lines.
# distcc clutter.
s/^distcc\[\d+\] .*\n//gm;
# c vs. c++.
s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm;
# Function context.
s/^[^:]*: In function '[^']+':\n//gm;
# Caret error (with possible '~' to underline).
s/^ *#error.*\n *\^~*\n//gm;
# Number of errors.
s/^1 error generated\.\n//gm;
# 2. Normalize the lines we kept.
# xlc messages.  Remove also error identifier (e.g., "1540-0218 (S)").
s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm;
# Remove column.
s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm;
# Map all combinations of "error: " and "#error: " to "#error ".
s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm;
EOF
stdout:
syncline.c:4: #error "4"
./synclines.at:310: test "`cat stdout`" = 'syncline.c:4: #error "4"' || exit 77
./synclines.at:310: $CC $CFLAGS $CPPFLAGS -c input.c
stdout:
syncline.c:4: #error "4"
./synclines.at:327: test "`cat stdout`" = 'syncline.c:4: #error "4"' || exit 77
./synclines.at:327: $CC $CFLAGS $CPPFLAGS -c input.c
stderr:
input.y:8:2: error: #error "8"
    8 | #error "8"
      | ^~~~~
./synclines.at:310: "$PERL" -p -0777 - stderr <<\EOF || exit 77
# Remove left-hand margin.
s/^[\d ]{6}\| //gm;
# 1. Remove useless lines.
# distcc clutter.
s/^distcc\[\d+\] .*\n//gm;
# c vs. c++.
s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm;
# Function context.
s/^[^:]*: In function '[^']+':\n//gm;
# Caret error (with possible '~' to underline).
s/^ *#error.*\n *\^~*\n//gm;
# Number of errors.
s/^1 error generated\.\n//gm;
# 2. Normalize the lines we kept.
# xlc messages.  Remove also error identifier (e.g., "1540-0218 (S)").
s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm;
# Remove column.
s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm;
# Map all combinations of "error: " and "#error: " to "#error ".
s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm;
EOF
stdout:
input.y:8: #error "8"
./synclines.at:310: cat stdout
stderr:
input.y:2:2: error: #error "2"
    2 | #error "2"
      | ^~~~~
279. synclines.at:310: ok
./synclines.at:327: "$PERL" -p -0777 - stderr <<\EOF || exit 77
# Remove left-hand margin.
s/^[\d ]{6}\| //gm;
# 1. Remove useless lines.
# distcc clutter.
s/^distcc\[\d+\] .*\n//gm;
# c vs. c++.
s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm;
# Function context.
s/^[^:]*: In function '[^']+':\n//gm;
# Caret error (with possible '~' to underline).
s/^ *#error.*\n *\^~*\n//gm;
# Number of errors.
s/^1 error generated\.\n//gm;
# 2. Normalize the lines we kept.
# xlc messages.  Remove also error identifier (e.g., "1540-0218 (S)").
s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm;
# Remove column.
s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm;
# Map all combinations of "error: " and "#error: " to "#error ".
s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm;
EOF
stdout:
input.y:2: #error "2"
./synclines.at:327: cat stdout
280. synclines.at:327: ok
281. synclines.at:346: testing %destructor syncline ...
./synclines.at:346: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
282. synclines.at:370: testing %printer syncline ...
./synclines.at:370: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./synclines.at:346: $CC $CFLAGS $CPPFLAGS -c syncline.c
./synclines.at:370: $CC $CFLAGS $CPPFLAGS -c syncline.c
stderr:
syncline.c: In function 'foo':
syncline.c:4:2: error: #error "4"
    4 | #error "4"
      | ^~~~~
./synclines.at:346: "$PERL" -p -0777 - stderr <<\EOF || exit 77
# Remove left-hand margin.
s/^[\d ]{6}\| //gm;
# 1. Remove useless lines.
# distcc clutter.
s/^distcc\[\d+\] .*\n//gm;
# c vs. c++.
s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm;
# Function context.
s/^[^:]*: In function '[^']+':\n//gm;
# Caret error (with possible '~' to underline).
s/^ *#error.*\n *\^~*\n//gm;
# Number of errors.
s/^1 error generated\.\n//gm;
# 2. Normalize the lines we kept.
# xlc messages.  Remove also error identifier (e.g., "1540-0218 (S)").
s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm;
# Remove column.
s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm;
# Map all combinations of "error: " and "#error: " to "#error ".
s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm;
EOF
stdout:
syncline.c:4: #error "4"
./synclines.at:346: test "`cat stdout`" = 'syncline.c:4: #error "4"' || exit 77
./synclines.at:346: $CC $CFLAGS $CPPFLAGS -c input.c
stderr:
syncline.c: In function 'foo':
syncline.c:4:2: error: #error "4"
    4 | #error "4"
      | ^~~~~
./synclines.at:370: "$PERL" -p -0777 - stderr <<\EOF || exit 77
# Remove left-hand margin.
s/^[\d ]{6}\| //gm;
# 1. Remove useless lines.
# distcc clutter.
s/^distcc\[\d+\] .*\n//gm;
# c vs. c++.
s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm;
# Function context.
s/^[^:]*: In function '[^']+':\n//gm;
# Caret error (with possible '~' to underline).
s/^ *#error.*\n *\^~*\n//gm;
# Number of errors.
s/^1 error generated\.\n//gm;
# 2. Normalize the lines we kept.
# xlc messages.  Remove also error identifier (e.g., "1540-0218 (S)").
s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm;
# Remove column.
s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm;
# Map all combinations of "error: " and "#error: " to "#error ".
s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm;
EOF
stdout:
syncline.c:4: #error "4"
./synclines.at:370: test "`cat stdout`" = 'syncline.c:4: #error "4"' || exit 77
./synclines.at:370: $CC $CFLAGS $CPPFLAGS -c input.c
stderr:
input.y: In function 'yydestruct':
input.y:2:2: error: #error "2"
    2 | #error "2"
      | ^~~~~
./synclines.at:346: "$PERL" -p -0777 - stderr <<\EOF || exit 77
# Remove left-hand margin.
s/^[\d ]{6}\| //gm;
# 1. Remove useless lines.
# distcc clutter.
s/^distcc\[\d+\] .*\n//gm;
# c vs. c++.
s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm;
# Function context.
s/^[^:]*: In function '[^']+':\n//gm;
# Caret error (with possible '~' to underline).
s/^ *#error.*\n *\^~*\n//gm;
# Number of errors.
s/^1 error generated\.\n//gm;
# 2. Normalize the lines we kept.
# xlc messages.  Remove also error identifier (e.g., "1540-0218 (S)").
s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm;
# Remove column.
s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm;
# Map all combinations of "error: " and "#error: " to "#error ".
s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm;
EOF
stdout:
input.y:2: #error "2"
./synclines.at:346: cat stdout
281.
synclines.at:346: ok
stderr:
input.y: In function 'yy_symbol_value_print':
input.y:2:2: error: #error "2"
    2 | #error "2"
      | ^~~~~
./synclines.at:370: "$PERL" -p -0777 - stderr <<\EOF || exit 77
# Remove left-hand margin.
s/^[\d ]{6}\| //gm;
# 1. Remove useless lines.
# distcc clutter.
s/^distcc\[\d+\] .*\n//gm;
# c vs. c++.
s/^clang: warning: treating 'c' input as 'c\+\+'.*\n//gm;
# Function context.
s/^[^:]*: In function '[^']+':\n//gm;
# Caret error (with possible '~' to underline).
s/^ *#error.*\n *\^~*\n//gm;
# Number of errors.
s/^1 error generated\.\n//gm;
# 2. Normalize the lines we kept.
# xlc messages.  Remove also error identifier (e.g., "1540-0218 (S)").
s/^"(.*?)", line ([\w.]*): \d+-\d+ \(.\) /$1:$2: /gm;
# Remove column.
s/^([^:]+:\d+)[.:][^:]+:(.+)$/$1:$2/gm;
# Map all combinations of "error: " and "#error: " to "#error ".
s/^([^:]+:\d+):( |#error|error|:)+/$1: #error /gm;
EOF
283. synclines.at:440: testing syncline escapes: yacc.c ...
./synclines.at:440: $CC $CFLAGS $CPPFLAGS \"\\\"\".c -o \"\\\"\" || exit 77
stdout:
input.y:2: #error "2"
./synclines.at:370: cat stdout
282. synclines.at:370: ok
284. synclines.at:440: testing syncline escapes: glr.c ...
./synclines.at:440: $CC $CFLAGS $CPPFLAGS \"\\\"\".c -o \"\\\"\" || exit 77
stderr:
stdout:
./synclines.at:440: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o \"\\\"\".c \"\\\"\".y
./synclines.at:440: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o \"\\\"\" \"\\\"\".c $LIBS
stderr:
stdout:
./synclines.at:440: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o \"\\\"\".c \"\\\"\".y
./synclines.at:440: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o \"\\\"\" \"\\\"\".c $LIBS
stderr:
stdout:
283. synclines.at:440: ok
285. synclines.at:440: testing syncline escapes: lalr1.cc ...
./synclines.at:440: $CXX $CXXFLAGS $CPPFLAGS \"\\\"\".cc -o \"\\\"\" || exit 77
stderr:
stdout:
./synclines.at:440: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o \"\\\"\".cc \"\\\"\".y
./synclines.at:440: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o \"\\\"\" \"\\\"\".cc $LIBS
stderr:
input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr]
input.y: warning: 6 reduce/reduce conflicts [-Wconflicts-rr]
input.y: warning: reduce/reduce conflict on token A [-Wcounterexamples]
  First example: . c A A $end
  First reduce derivation  $accept `-> 0: a $end `-> 1: b d `-> 3: %empty . `-> 6: c A A
  Second example: . c A A $end
  Second reduce derivation  $accept `-> 0: a $end `-> 2: c d `-> 4: %empty . `-> 6: c A A
input.y: warning: reduce/reduce conflict on token A [-Wcounterexamples]
time limit exceeded: 6.000000
  First example: b . c A A $end
  First reduce derivation  $accept `-> 0: a $end `-> 1: b d `-> 5: a `-> 1: b d `-> 3: %empty . `-> 6: c A A
  Second example: b . A $end
  Second reduce derivation  $accept `-> 0: a $end `-> 1: b d `-> 6: c A `-> 4: %empty .
input.y: warning: reduce/reduce conflict on token A [-Wcounterexamples]
time limit exceeded: 6.000000
  First example: c . c A A $end
  First reduce derivation  $accept `-> 0: a $end `-> 2: c d `-> 5: a `-> 1: b d `-> 3: %empty . `-> 6: c A A
  Second example: c . A $end
  Second reduce derivation  $accept `-> 0: a $end `-> 2: c d `-> 6: c A `-> 4: %empty .
input.y: warning: shift/reduce conflict on token A [-Wcounterexamples]
time limit exceeded: 6.000000
  First example: b c . A
  Shift derivation  a `-> 1: b d `-> 6: c . A
  Second example: b c . c A A $end
  Reduce derivation  $accept `-> 0: a $end `-> 1: b d `-> 5: a `-> 2: c d `-> 5: a `-> 1: b d `-> 3: %empty . `-> 6: c A A
input.y: warning: reduce/reduce conflict on token A [-Wcounterexamples]
  First example: b c . c A A $end
  First reduce derivation  $accept `-> 0: a $end `-> 1: b d `-> 5: a `-> 2: c d `-> 5: a `-> 1: b d `-> 3: %empty . `-> 6: c A A
  Second example: b c . A $end
  Second reduce derivation  $accept `-> 0: a $end `-> 1: b d `-> 5: a `-> 2: c d `-> 6: c A `-> 4: %empty .
input.y: warning: shift/reduce conflict on token A [-Wcounterexamples]
  First example: b c . A
  Shift derivation  a `-> 1: b d `-> 6: c . A
  Second example: b c . A $end
  Reduce derivation  $accept `-> 0: a $end `-> 1: b d `-> 5: a `-> 2: c d `-> 6: c A `-> 4: %empty .
input.y: warning: reduce/reduce conflict on token $end [-Wcounterexamples]
  Example: b d .
  First reduce derivation  a `-> 1: b d .
  Second reduce derivation  a `-> 1: b d `-> 7: d .
input.y: warning: reduce/reduce conflict on token $end [-Wcounterexamples]
  Example: c d .
  First reduce derivation  a `-> 2: c d .
  Second reduce derivation  a `-> 2: c d `-> 7: d .
input.y:5.4: warning: rule useless in parser due to conflicts [-Wother]
input.y:6.15: warning: rule useless in parser due to conflicts [-Wother]
./counterexample.at:621: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g;s/ *$//;' stderr
./counterexample.at:621: YYFLAT=1; export YYFLAT;COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wcounterexamples input.y
stderr:
stdout:
284. synclines.at:440: ok
286. synclines.at:440: testing syncline escapes: glr.cc ...
./synclines.at:440: $CXX $CXXFLAGS $CPPFLAGS \"\\\"\".cc -o \"\\\"\" || exit 77
stderr:
stdout:
./synclines.at:440: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o \"\\\"\".cc \"\\\"\".y
./synclines.at:440: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o \"\\\"\" \"\\\"\".cc $LIBS
stderr:
stdout:
285. synclines.at:440: ok
287. synclines.at:440: testing syncline escapes: glr2.cc ...
./synclines.at:440: $CXX $CXXFLAGS $CPPFLAGS \"\\\"\".cc -o \"\\\"\" || exit 77
stderr:
stdout:
./synclines.at:440: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o \"\\\"\".cc \"\\\"\".y
stderr:
input.y: warning: 1 shift/reduce conflict [-Wconflicts-sr]
input.y: warning: 6 reduce/reduce conflicts [-Wconflicts-rr]
input.y: warning: reduce/reduce conflict on token A [-Wcounterexamples]
  First example  . c A A $end
  First reduce derivation  $accept -> [ a -> [ b -> [ . ] d -> [ c A A ] ] $end ]
  Second example  . c A A $end
  Second reduce derivation  $accept -> [ a -> [ c -> [ . ] d -> [ c A A ] ] $end ]
input.y: warning: reduce/reduce conflict on token A [-Wcounterexamples]
time limit exceeded: 6.000000
  First example  b . c A A $end
  First reduce derivation  $accept -> [ a -> [ b d -> [ a -> [ b -> [ . ] d -> [ c A A ] ] ] ] $end ]
  Second example  b . A $end
  Second reduce derivation  $accept -> [ a -> [ b d -> [ c -> [ . ] A ] ] $end ]
input.y: warning: reduce/reduce conflict on token A [-Wcounterexamples]
time limit exceeded: 6.000000
  First example  c . c A A $end
  First reduce derivation  $accept -> [ a -> [ c d -> [ a -> [ b -> [ . ] d -> [ c A A ] ] ] ] $end ]
  Second example  c . A $end
  Second reduce derivation  $accept -> [ a -> [ c d -> [ c -> [ . ] A ] ] $end ]
input.y: warning: shift/reduce conflict on token A [-Wcounterexamples]
time limit exceeded: 6.000000
  First example  b c . A
  Shift derivation  a -> [ b d -> [ c . A ] ]
  Second example  b c . c A A $end
  Reduce derivation  $accept -> [ a -> [ b d -> [ a -> [ c d -> [ a -> [ b -> [ . ] d -> [ c A A ] ] ] ] ] ] $end ]
input.y: warning: reduce/reduce conflict on token A [-Wcounterexamples]
  First example  b c . c A A $end
  First reduce derivation  $accept -> [ a -> [ b d -> [ a -> [ c d -> [ a -> [ b -> [ . ] d -> [ c A A ] ] ] ] ] ] $end ]
  Second example  b c . A $end
  Second reduce derivation  $accept -> [ a -> [ b d -> [ a -> [ c d -> [ c -> [ . ] A ] ] ] ] $end ]
input.y: warning: shift/reduce conflict on token A [-Wcounterexamples]
  First example  b c . A
  Shift derivation  a -> [ b d -> [ c . A ] ]
  Second example  b c . A $end
  Reduce derivation  $accept -> [ a -> [ b d -> [ a -> [ c d -> [ c -> [ . ] A ] ] ] ] $end ]
input.y: warning: reduce/reduce conflict on token $end [-Wcounterexamples]
  Example  b d .
  First reduce derivation  a -> [ b d . ]
  Second reduce derivation  a -> [ b d -> [ d . ] ]
input.y: warning: reduce/reduce conflict on token $end [-Wcounterexamples]
  Example  c d .
  First reduce derivation  a -> [ c d . ]
  Second reduce derivation  a -> [ c d -> [ d . ] ]
input.y:5.4: warning: rule useless in parser due to conflicts [-Wother]
input.y:6.15: warning: rule useless in parser due to conflicts [-Wother]
./counterexample.at:621: sed -e 's/time limit exceeded: [0-9][.0-9]*/time limit exceeded: XXX/g' stderr
270. counterexample.at:610: ok
./synclines.at:440: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o \"\\\"\" \"\\\"\".cc $LIBS
288. synclines.at:497: testing %no-lines: yacc.c ...
./synclines.at:497: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --no-lines -o input.c -d input.y
./synclines.at:497: mv input.c without.c
./synclines.at:497: mv input.h without.h
./synclines.at:497: grep '#line' *.c *.h
./synclines.at:497: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c -d input.y
./synclines.at:497: mv input.c with.c
./synclines.at:497: mv input.h with.h
./synclines.at:497: grep -v '#line' with.c >expout
./synclines.at:497: cat without.c
./synclines.at:497: grep -v '#line' with.h >expout
./synclines.at:497: cat without.h
288. synclines.at:497: ok
289. synclines.at:497: testing %no-lines: glr.c ...
./synclines.at:497: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --no-lines -o input.c -d input.y
./synclines.at:497: mv input.c without.c
./synclines.at:497: mv input.h without.h
./synclines.at:497: grep '#line' *.c *.h
./synclines.at:497: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c -d input.y
./synclines.at:497: mv input.c with.c
./synclines.at:497: mv input.h with.h
./synclines.at:497: grep -v '#line' with.c >expout
./synclines.at:497: cat without.c
./synclines.at:497: grep -v '#line' with.h >expout
./synclines.at:497: cat without.h
289. synclines.at:497: ok
290. synclines.at:497: testing %no-lines: lalr1.cc ...
./synclines.at:497: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --no-lines -o input.cc -d input.y
./synclines.at:497: mv input.cc without.cc
./synclines.at:497: mv input.hh without.hh
./synclines.at:497: grep '#line' *.cc *.hh
./synclines.at:497: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc -d input.y
./synclines.at:497: mv input.cc with.cc
./synclines.at:497: mv input.hh with.hh
./synclines.at:497: grep -v '#line' with.cc >expout
./synclines.at:497: cat without.cc
./synclines.at:497: grep -v '#line' with.hh >expout
./synclines.at:497: cat without.hh
290. synclines.at:497: ok
291. synclines.at:497: testing %no-lines: glr.cc ...
./synclines.at:497: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --no-lines -o input.cc -d input.y
./synclines.at:497: mv input.cc without.cc
./synclines.at:497: mv input.hh without.hh
./synclines.at:497: grep '#line' *.cc *.hh
./synclines.at:497: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc -d input.y
./synclines.at:497: mv input.cc with.cc
./synclines.at:497: mv input.hh with.hh
./synclines.at:497: grep -v '#line' with.cc >expout
./synclines.at:497: cat without.cc
./synclines.at:497: grep -v '#line' with.hh >expout
./synclines.at:497: cat without.hh
291. synclines.at:497: ok
292. synclines.at:497: testing %no-lines: glr2.cc ...
./synclines.at:497: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --no-lines -o input.cc -d input.y
./synclines.at:497: mv input.cc without.cc
./synclines.at:497: mv input.hh without.hh
./synclines.at:497: grep '#line' *.cc *.hh
./synclines.at:497: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc -d input.y
./synclines.at:497: mv input.cc with.cc
./synclines.at:497: mv input.hh with.hh
./synclines.at:497: grep -v '#line' with.cc >expout
./synclines.at:497: cat without.cc
./synclines.at:497: grep -v '#line' with.hh >expout
./synclines.at:497: cat without.hh
292. synclines.at:497: ok
293. synclines.at:507: testing Output columns ...
./synclines.at:540: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./synclines.at:541: sed -ne '/--BEGIN/,/--END/{' \
   -e '/input.c/s/ [0-9]* / LINE /;' \
   -e 'p;}' \
   input.c
293. synclines.at:507: ok
294. headers.at:56: testing Invalid CPP guards: --defines=input/input.h ...
./headers.at:56: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --defines=input/input.h --output=input/input.c input/input.y ./headers.at:56: $CC $CFLAGS $CPPFLAGS -c -o input/input.o -I. -c input/input.c stderr: stdout: 286. synclines.at:440: ok 295. headers.at:57: testing Invalid CPP guards: --defines=9foo.h ... ./headers.at:57: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --defines=9foo.h --output=9foo.c 9foo.y ./headers.at:57: $CC $CFLAGS $CPPFLAGS -c -o 9foo.o -I. -c 9foo.c stderr: stdout: 294. headers.at:56: ok stderr: 296. headers.at:58: testing Invalid CPP guards: %glr-parser --defines=input/input.h ... stdout: 295. headers.at:57: ok ./headers.at:58: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --defines=input/input.h --output=input/input.c input/input.y 297. headers.at:59: testing Invalid CPP guards: %glr-parser --defines=9foo.h ... ./headers.at:59: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --defines=9foo.h --output=9foo.c 9foo.y ./headers.at:58: $CC $CFLAGS $CPPFLAGS -c -o input/input.o -I. -c input/input.c ./headers.at:59: $CC $CFLAGS $CPPFLAGS -c -o 9foo.o -I. -c 9foo.c stderr: stdout: 296. headers.at:58: ok 298. headers.at:67: testing export YYLTYPE ... ./headers.at:85: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --header -o input.c input.y stderr: stdout: 297. headers.at:59: ok ./headers.at:85: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --header -o input.c input.y -Werror 299. headers.at:177: testing Sane headers: ... 
./headers.at:177: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o input.c input.y
stderr:
input.y:11.1-18: error: deprecated directive: '%name-prefix "my_"', use '%define api.prefix {my_}' [-Werror=deprecated]
input.y: error: fix-its can be applied. Rerun with option '--update'. [-Werror=other]
./headers.at:85: sed 's,.*/$,,' stderr 1>&2
./headers.at:177: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
./headers.at:85: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --header -o input.c input.y --warnings=error
./headers.at:85: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --header -o input.c input.y -Wnone,none -Werror --trace=none
./headers.at:85: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret --header -o input.c input.y --warnings=none -Werror --trace=none
./headers.at:102: $CC $CFLAGS $CPPFLAGS -c -o caller.o caller.c
stderr:
stdout:
./headers.at:103: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
stderr:
stdout:
./headers.at:104: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o caller caller.o input.o $LIBS
stderr:
stdout:
./headers.at:177: $CC $CFLAGS $CPPFLAGS -c -o $h.o $h.c
stderr:
stdout:
stderr:
./headers.at:105: $PREPARSER ./caller
stdout:
299. headers.at:177: ok
stderr:
./headers.at:105: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
298. headers.at:67: ok
300. headers.at:178: testing Sane headers: %locations %debug ...
./headers.at:178: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o input.c input.y
301. headers.at:180: testing Sane headers: %glr-parser ...
./headers.at:180: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o input.c input.y
./headers.at:180: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
./headers.at:178: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
stderr:
stdout:
./headers.at:178: $CC $CFLAGS $CPPFLAGS -c -o $h.o $h.c
stderr:
stdout:
300. headers.at:178: ok
302. headers.at:181: testing Sane headers: %locations %debug %glr-parser ...
./headers.at:181: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o input.c input.y
./headers.at:181: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
stderr:
stdout:
./headers.at:180: $CC $CFLAGS $CPPFLAGS -c -o $h.o $h.c
stderr:
stdout:
301. headers.at:180: ok
303. headers.at:183: testing Sane headers: api.pure ...
./headers.at:183: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o input.c input.y
./headers.at:183: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from "\"".cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at "\"".cc:2080:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at "\"".cc:1919:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at "\"".cc:2342:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at "\"".cc:2080:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at "\"".cc:1905:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at "\"".cc:2744:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at "\"".cc:2080:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at "\"".cc:1905:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at "\"".cc:2727:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at "\"".cc:2715:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
287. synclines.at:440: ok
304. headers.at:184: testing Sane headers: api.push-pull=both ...
./headers.at:184: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o input.c input.y
stderr:
stdout:
./headers.at:183: $CC $CFLAGS $CPPFLAGS -c -o $h.o $h.c
./headers.at:184: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
stderr:
stdout:
303. headers.at:183: ok
305. headers.at:185: testing Sane headers: api.pure api.push-pull=both ...
./headers.at:185: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o input.c input.y
./headers.at:185: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
stderr:
stdout:
./headers.at:184: $CC $CFLAGS $CPPFLAGS -c -o $h.o $h.c
stderr:
stdout:
304. headers.at:184: ok
306. headers.at:187: testing Sane headers: c++ ...
./headers.at:187: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o input.cc input.y
stderr:
stdout:
./headers.at:185: $CC $CFLAGS $CPPFLAGS -c -o $h.o $h.c
./headers.at:187: $CXX $CPPFLAGS $CXXFLAGS -c -o input.o input.cc
stderr:
stdout:
305. headers.at:185: ok
307. headers.at:188: testing Sane headers: %locations %debug c++ ...
./headers.at:188: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o input.cc input.y
./headers.at:188: $CXX $CPPFLAGS $CXXFLAGS -c -o input.o input.cc
stderr:
stdout:
./headers.at:181: $CC $CFLAGS $CPPFLAGS -c -o $h.o $h.c
stderr:
stdout:
302. headers.at:181: ok
308. headers.at:189: testing Sane headers: c++ api.value.type=variant parse.assert ...
./headers.at:189: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o input.cc input.y
./headers.at:189: $CXX $CPPFLAGS $CXXFLAGS -c -o input.o input.cc
stderr:
stdout:
./headers.at:187: $CXX $CPPFLAGS $CXXFLAGS -c -o $h.o $h.cc
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from input.hh:53,
                 from input.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.hh:1023:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./headers.at:189: $CXX $CPPFLAGS $CXXFLAGS -c -o $h.o $h.cc
stderr:
stdout:
./headers.at:188: $CXX $CPPFLAGS $CXXFLAGS -c -o $h.o $h.cc
stderr:
stdout:
./headers.at:187: $CXX $CPPFLAGS $CXXFLAGS -c -o $h.o $h.cc
stderr:
stdout:
306. headers.at:187: ok
309. headers.at:191: testing Sane headers: %locations c++ %glr-parser ...
./headers.at:191: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o input.cc input.y
./headers.at:191: $CXX $CPPFLAGS $CXXFLAGS -c -o input.o input.cc
stderr:
stdout:
./headers.at:189: $CXX $CPPFLAGS $CXXFLAGS -c -o $h.o $h.cc
stderr:
stdout:
308. headers.at:189: ok
310. headers.at:199: testing Several parsers ...
./headers.at:320: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o x1.c x1.y
./headers.at:320: $CC $CFLAGS $CPPFLAGS -c -o x1.o x1.c
stderr:
stdout:
./headers.at:188: $CXX $CPPFLAGS $CXXFLAGS -c -o $h.o $h.cc
stderr:
stdout:
./headers.at:320: echo "x1" >>expout
./headers.at:321: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o x2.c x2.y
./headers.at:321: $CC $CFLAGS $CPPFLAGS -c -o x2.o x2.c
stderr:
stdout:
./headers.at:321: echo "x2" >>expout
./headers.at:322: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o x3.c x3.y
stderr:
stdout:
./headers.at:188: $CXX $CPPFLAGS $CXXFLAGS -c -o $h.o $h.cc
./headers.at:322: $CC $CFLAGS $CPPFLAGS -c -o x3.o x3.c
stderr:
stdout:
./headers.at:191: $CXX $CPPFLAGS $CXXFLAGS -c -o $h.o $h.cc
stderr:
stdout:
./headers.at:188: $CXX $CPPFLAGS $CXXFLAGS -c -o $h.o $h.cc
stderr:
stdout:
307. headers.at:188: ok
311. actions.at:24: testing Midrule actions ...
./actions.at:59: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -v -o input.c input.y
./actions.at:60: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./headers.at:322: echo "x3" >>expout
./headers.at:323: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o x4.c x4.y
./headers.at:323: $CC $CFLAGS $CPPFLAGS -c -o x4.o x4.c
stderr:
stdout:
./headers.at:191: $CXX $CPPFLAGS $CXXFLAGS -c -o $h.o $h.cc
stderr:
stdout:
./actions.at:61: $PREPARSER ./input
stderr:
./actions.at:61: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
311. actions.at:24: ok
312. actions.at:72: testing Typed midrule actions ...
./actions.at:109: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -v -o input.c input.y
./actions.at:110: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./headers.at:191: $CXX $CPPFLAGS $CXXFLAGS -c -o $h.o $h.cc
stderr:
stdout:
./actions.at:111: $PREPARSER ./input
stderr:
./actions.at:111: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
312. actions.at:72: ok
313. actions.at:122: testing Implicitly empty rule ...
./actions.at:133: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -Wempty-rule 1.y
./actions.at:133: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wempty-rule 1.y -Werror
stderr:
1.y:11.17-18: error: empty rule without %empty [-Werror=empty-rule]
   11 | a: /* empty. */ {};
      |                 ^~
      |                 %empty
1.y: error: fix-its can be applied. Rerun with option '--update'. [-Werror=other]
./actions.at:133: sed 's,.*/$,,' stderr 1>&2
./actions.at:133: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wempty-rule 1.y --warnings=error
./actions.at:133: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wempty-rule 1.y -Wnone,none -Werror --trace=none
./actions.at:133: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -Wempty-rule 1.y --warnings=none -Werror --trace=none
./actions.at:149: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret 2.y
stderr:
stdout:
./headers.at:323: echo "x4" >>expout
./headers.at:324: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o x5.cc x5.y
./actions.at:149: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret 2.y -Werror
stderr:
2.y:11.17-18: error: empty rule without %empty [-Werror=empty-rule]
   11 | a: /* empty. */ {};
      |                 ^~
      |                 %empty
2.y:13.17-18: error: empty rule without %empty [-Werror=empty-rule]
   13 | c: /* empty. */ {};
      |                 ^~
      |                 %empty
2.y: error: fix-its can be applied. Rerun with option '--update'. [-Werror=other]
./actions.at:149: sed 's,.*/$,,' stderr 1>&2
./actions.at:149: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret 2.y --warnings=error
./headers.at:324: $CXX $CPPFLAGS $CXXFLAGS -c -o x5.o x5.cc
./actions.at:149: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret 2.y -Wnone,none -Werror --trace=none
./actions.at:149: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret 2.y --warnings=none -Werror --trace=none
./actions.at:161: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -Wno-empty-rule 2.y
313. actions.at:122: ok
314. actions.at:172: testing Invalid uses of %empty ...
./actions.at:182: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret one.y
./actions.at:192: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -u one.y
./actions.at:202: sed -e '1,8d' one.y
./actions.at:219: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret two.y
314. actions.at:172: ok
315. actions.at:240: testing Valid uses of %empty ...
./actions.at:259: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
stderr:
stdout:
309. headers.at:191: ok
./actions.at:259: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
316. actions.at:270: testing Add missing %empty ...
./actions.at:285: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --update -Wall input.y
stderr:
input.y:3.4-5: warning: empty rule without %empty [-Wempty-rule]
input.y:4.3-5.1: warning: empty rule without %empty [-Wempty-rule]
input.y:6.3: warning: empty rule without %empty [-Wempty-rule]
input.y:8.2: warning: empty rule without %empty [-Wempty-rule]
input.y:9.3: warning: empty rule without %empty [-Wempty-rule]
bison: file 'input.y' was updated (backup: 'input.y~')
./actions.at:286: cat input.y
./actions.at:300: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall input.y
316. actions.at:270: ok
317. actions.at:365: testing Initial location: yacc.c ...
./actions.at:365: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:365: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./actions.at:260: $PREPARSER ./input
stderr:
./actions.at:260: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
315. actions.at:240: ok
318. actions.at:366: testing Initial location: yacc.c api.pure=full ...
./actions.at:366: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:366: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./actions.at:365: $PREPARSER ./input
stderr:
1.1
1.1: syntax error
./actions.at:365: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
317. actions.at:365: ok
319. actions.at:367: testing Initial location: yacc.c api.pure %parse-param { int x } ...
./actions.at:367: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:367: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./actions.at:366: $PREPARSER ./input
stderr:
1.1
1.1: syntax error
./actions.at:366: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
318. actions.at:366: ok
320. actions.at:368: testing Initial location: yacc.c api.push-pull=both ...
./actions.at:368: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:368: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./actions.at:367: $PREPARSER ./input
stderr:
1.1
1.1: syntax error
./actions.at:367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
319. actions.at:367: ok
321. actions.at:369: testing Initial location: yacc.c api.push-pull=both api.pure=full ...
./actions.at:369: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:369: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./actions.at:368: $PREPARSER ./input
stderr:
1.1
1.1: syntax error
./actions.at:368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
320. actions.at:368: ok
322. actions.at:370: testing Initial location: glr.c ...
./actions.at:370: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:370: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./actions.at:369: $PREPARSER ./input
stderr:
1.1
1.1: syntax error
./actions.at:369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
321. actions.at:369: ok
323. actions.at:371: testing Initial location: glr.c api.pure ...
./actions.at:371: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
stderr:
stdout:
./headers.at:324: echo "x5" >>expout
./headers.at:325: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o x6.c x6.y
./actions.at:371: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./headers.at:325: $CC $CFLAGS $CPPFLAGS -c -o x6.o x6.c
stderr:
stdout:
./headers.at:325: echo "x6" >>expout
./headers.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o x7.c x7.y
./headers.at:326: $CC $CFLAGS $CPPFLAGS -c -o x7.o x7.c
stderr:
stdout:
./headers.at:326: echo "x7" >>expout
./headers.at:327: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o x8.c x8.y
stderr:
stdout:
./actions.at:370: $PREPARSER ./input
stderr:
1.1
1.1: syntax error
./actions.at:370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
322. actions.at:370: ok
./headers.at:327: $CC $CFLAGS $CPPFLAGS -c -o x8.o x8.c
324. actions.at:372: testing Initial location: lalr1.cc ...
./actions.at:372: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./actions.at:372: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./actions.at:371: $PREPARSER ./input
stderr:
1.1
1.1: syntax error
./actions.at:371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
323. actions.at:371: ok
325. actions.at:373: testing Initial location: glr.cc ...
./actions.at:373: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./actions.at:373: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./headers.at:327: echo "x8" >>expout
./headers.at:328: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o x9.cc x9.y
./headers.at:328: $CXX $CPPFLAGS $CXXFLAGS -c -o x9.o x9.cc
stderr:
stdout:
./actions.at:372: $PREPARSER ./input
stderr:
1.1
1.1: syntax error
./actions.at:372: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
324. actions.at:372: ok
326. actions.at:374: testing Initial location: glr2.cc ...
./actions.at:374: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./actions.at:374: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./actions.at:373: $PREPARSER ./input
stderr:
1.1
1.1: syntax error
./actions.at:373: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
325. actions.at:373: ok
327. actions.at:383: testing Initial location: yacc.c api.pure=full ...
./actions.at:383: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
stderr:
stdout:
./headers.at:328: echo "x9" >>expout
./headers.at:329: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o xa.cc xa.y
./actions.at:383: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./headers.at:329: $CXX $CPPFLAGS $CXXFLAGS -c -o xa.o xa.cc
stderr:
stdout:
./actions.at:383: $PREPARSER ./input
stderr:
: syntax error
./actions.at:383: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
327. actions.at:383: ok
328. actions.at:394: testing Initial location: yacc.c api.pure=full ...
./actions.at:394: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:394: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./actions.at:394: $PREPARSER ./input
stderr:
0
0: syntax error
./actions.at:394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
328. actions.at:394: ok
329. actions.at:478: testing Location print: yacc.c ...
./actions.at:478: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:478: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./actions.at:478: $PREPARSER ./input
stderr:
./actions.at:478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
329. actions.at:478: ok
330. actions.at:478: testing Location print: glr.c ...
./actions.at:478: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:478: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from xa.hh:58,
                 from xa.cc:53:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {xa_::parser::stack_symbol_type}; _Tp = xa_::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {xa_::parser::stack_symbol_type}; _Tp = xa_::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = xa_::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void xa_::parser::stack::push(T&&) [with T = xa_::parser::stack_symbol_type; S = std::vector >]' at xa.hh:1017:24,
    inlined from 'void xa_::parser::yypush_(const char*, stack_symbol_type&&)' at xa.cc:408:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {xa_::parser::stack_symbol_type}; _Tp = xa_::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = xa_::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void xa_::parser::stack::push(T&&) [with T = xa_::parser::stack_symbol_type; S = std::vector >]' at xa.hh:1017:24,
    inlined from 'void xa_::parser::yypush_(const char*, stack_symbol_type&&)' at xa.cc:408:19,
    inlined from 'void xa_::parser::yypush_(const char*, state_type, symbol_type&&)' at xa.cc:415:13:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {xa_::parser::stack_symbol_type}; _Tp = xa_::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = xa_::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void xa_::parser::stack::push(T&&) [with T = xa_::parser::stack_symbol_type; S = std::vector >]' at xa.hh:1017:24,
    inlined from 'void xa_::parser::yypush_(const char*, stack_symbol_type&&)' at xa.cc:408:19,
    inlined from 'virtual int xa_::parser::parse()' at xa.cc:739:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {xa_::parser::stack_symbol_type}; _Tp = xa_::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = xa_::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void xa_::parser::stack::push(T&&) [with T = xa_::parser::stack_symbol_type; S = std::vector >]' at xa.hh:1017:24,
    inlined from 'void xa_::parser::yypush_(const char*, stack_symbol_type&&)' at xa.cc:408:19,
    inlined from 'virtual int xa_::parser::parse()' at xa.cc:833:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./headers.at:329: echo "xa" >>expout
./headers.at:330: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o xb.cc xb.y
./headers.at:330: $CXX $CPPFLAGS $CXXFLAGS -c -o xb.o xb.cc
stderr:
stdout:
./actions.at:478: $PREPARSER ./input
stderr:
./actions.at:478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
330. actions.at:478: ok
331. actions.at:478: testing Location print: lalr1.cc ...
./actions.at:478: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./actions.at:478: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from input.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2379:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:2215:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2645:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2379:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:2201:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&, const yy::parser::location_type&)' at input.cc:3082:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2379:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:2201:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:3065:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:3053:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./actions.at:374: $PREPARSER ./input stderr: 1.1 1.1: syntax error ./actions.at:374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 326. actions.at:374: ok 332. actions.at:478: testing Location print: glr.cc ... ./actions.at:478: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y ./actions.at:478: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from xb.hh:53, from xb.cc:53: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {xb_::parser::stack_symbol_type}; _Tp = xb_::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {xb_::parser::stack_symbol_type}; _Tp = xb_::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = xb_::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void xb_::parser::stack::push(T&&) [with T = xb_::parser::stack_symbol_type; S = std::vector >]' at xb.hh:1253:24, inlined from 'void xb_::parser::yypush_(const char*, stack_symbol_type&&)' at xb.cc:408:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {xb_::parser::stack_symbol_type}; _Tp = xb_::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = xb_::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void xb_::parser::stack::push(T&&) [with T = xb_::parser::stack_symbol_type; S = std::vector >]' at xb.hh:1253:24, inlined from 'void xb_::parser::yypush_(const char*, stack_symbol_type&&)' at xb.cc:408:19, inlined from 'void xb_::parser::yypush_(const char*, state_type, symbol_type&&)' at xb.cc:415:13: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {xb_::parser::stack_symbol_type}; _Tp = xb_::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = xb_::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void xb_::parser::stack::push(T&&) [with T = xb_::parser::stack_symbol_type; S = std::vector >]' at xb.hh:1253:24, inlined from 'void xb_::parser::yypush_(const char*, stack_symbol_type&&)' at xb.cc:408:19, inlined from 'virtual int xb_::parser::parse()' at xb.cc:739:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {xb_::parser::stack_symbol_type}; _Tp = xb_::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = xb_::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void xb_::parser::stack::push(T&&) [with T = xb_::parser::stack_symbol_type; S = std::vector >]' at xb.hh:1253:24, inlined from 'void xb_::parser::yypush_(const char*, stack_symbol_type&&)' at xb.cc:408:19, inlined from 'virtual int xb_::parser::parse()' at xb.cc:833:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./headers.at:330: echo "xb" >>expout ./headers.at:331: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o xc.cc xc.y ./headers.at:331: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS -c -o xc.o xc.cc stderr: stdout: ./actions.at:478: $PREPARSER ./input stderr: ./actions.at:478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 331. actions.at:478: ok 333. actions.at:478: testing Location print: glr2.cc ... ./actions.at:478: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y ./actions.at:478: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./actions.at:478: $PREPARSER ./input stderr: ./actions.at:478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 332. actions.at:478: ok 334. actions.at:488: testing Exotic Dollars ... 
./actions.at:532: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -v -o input.c input.y ./actions.at:533: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./actions.at:534: $PREPARSER ./input stderr: ./actions.at:534: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./actions.at:562: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./actions.at:562: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./actions.at:563: $PREPARSER ./input stderr: ./actions.at:563: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 334. actions.at:488: ok 335. actions.at:1047: testing Printers and Destructors ... ./actions.at:1047: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./actions.at:1047: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./actions.at:1047: $PREPARSER ./input '(x)' stderr: sending: '(' (0@0-9) sending: 'x' (1@10-19) thing (1@10-19): 'x' (1@10-19) sending: ')' (2@20-29) line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29) sending: END (3@30-39) input (0@29-29): /* Nothing */ input (2@0-29): line (0@0-29) input (0@29-29) Freeing token END (3@30-39) Freeing nterm input (2@0-29) Successful parse. ./actions.at:1047: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./actions.at:1047: $PREPARSER ./input '!' stderr: sending: '!' (0@0-9) sending: END (1@10-19) raise (4@9-9): %empty check-spontaneous-errors (5@9-19): error (@9-19) Freeing token END (1@10-19) Freeing nterm input (5@0-19) Successful parse. ./actions.at:1047: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./actions.at:1047: $PREPARSER ./input '!!!' stderr: sending: '!' (0@0-9) sending: '!' (1@10-19) sending: '!' (2@20-29) raise (5@10-29): ! (1@20-29) ! 
(2@20-29) check-spontaneous-errors (5@10-29): error (@10-29) sending: END (3@30-39) Freeing token END (3@30-39) Freeing nterm input (5@0-29) Successful parse. ./actions.at:1047: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./actions.at:1047: $PREPARSER ./input '(y)' stderr: sending: '(' (0@0-9) sending: 'y' (1@10-19) 10.10-19.18: syntax error, unexpected 'y', expecting 'x' Freeing token 'y' (1@10-19) sending: ')' (2@20-29) line (-1@0-29): '(' (0@0-9) error (@10-19) ')' (2@20-29) sending: END (3@30-39) input (0@29-29): /* Nothing */ input (2@0-29): line (-1@0-29) input (0@29-29) Freeing token END (3@30-39) Freeing nterm input (2@0-29) Successful parse. ./actions.at:1047: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./actions.at:1047: $PREPARSER ./input '(xxxxx)(x)(x)y' stderr: sending: '(' (0@0-9) sending: 'x' (1@10-19) thing (1@10-19): 'x' (1@10-19) sending: 'x' (2@20-29) thing (2@20-29): 'x' (2@20-29) sending: 'x' (3@30-39) 30.30-39.38: syntax error, unexpected 'x', expecting ')' Freeing nterm thing (2@20-29) Freeing nterm thing (1@10-19) Freeing token 'x' (3@30-39) sending: 'x' (4@40-49) Freeing token 'x' (4@40-49) sending: 'x' (5@50-59) Freeing token 'x' (5@50-59) sending: ')' (6@60-69) line (-1@0-69): '(' (0@0-9) error (@10-59) ')' (6@60-69) sending: '(' (7@70-79) sending: 'x' (8@80-89) thing (8@80-89): 'x' (8@80-89) sending: ')' (9@90-99) line (7@70-99): '(' (7@70-79) thing (8@80-89) ')' (9@90-99) sending: '(' (10@100-109) sending: 'x' (11@110-119) thing (11@110-119): 'x' (11@110-119) sending: ')' (12@120-129) line (10@100-129): '(' (10@100-109) thing (11@110-119) ')' (12@120-129) sending: 'y' (13@130-139) input (0@129-129): /* Nothing */ input (2@100-129): line (10@100-129) input (0@129-129) input (2@70-129): line (7@70-99) input (2@100-129) input (2@0-129): line (-1@0-69) input (2@70-129) 130.130-139.138: syntax error, unexpected 'y', expecting END Freeing nterm input (2@0-129) Freeing token 'y' (13@130-139) Parsing 
FAILED. ./actions.at:1047: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./actions.at:1047: $PREPARSER ./input '(x)(x)x' stderr: sending: '(' (0@0-9) sending: 'x' (1@10-19) thing (1@10-19): 'x' (1@10-19) sending: ')' (2@20-29) line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29) sending: '(' (3@30-39) sending: 'x' (4@40-49) thing (4@40-49): 'x' (4@40-49) sending: ')' (5@50-59) line (3@30-59): '(' (3@30-39) thing (4@40-49) ')' (5@50-59) sending: 'x' (6@60-69) thing (6@60-69): 'x' (6@60-69) sending: END (7@70-79) 70.70-79.78: syntax error, unexpected END, expecting 'x' Freeing nterm thing (6@60-69) Freeing nterm line (3@30-59) Freeing nterm line (0@0-29) Freeing token END (7@70-79) Parsing FAILED. ./actions.at:1047: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./actions.at:1047: $PREPARSER ./input '(x)(x)(x)(x)(x)(x)(x)' stderr: sending: '(' (0@0-9) sending: 'x' (1@10-19) thing (1@10-19): 'x' (1@10-19) sending: ')' (2@20-29) line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29) sending: '(' (3@30-39) sending: 'x' (4@40-49) thing (4@40-49): 'x' (4@40-49) sending: ')' (5@50-59) line (3@30-59): '(' (3@30-39) thing (4@40-49) ')' (5@50-59) sending: '(' (6@60-69) sending: 'x' (7@70-79) thing (7@70-79): 'x' (7@70-79) sending: ')' (8@80-89) line (6@60-89): '(' (6@60-69) thing (7@70-79) ')' (8@80-89) sending: '(' (9@90-99) sending: 'x' (10@100-109) thing (10@100-109): 'x' (10@100-109) sending: ')' (11@110-119) line (9@90-119): '(' (9@90-99) thing (10@100-109) ')' (11@110-119) sending: '(' (12@120-129) sending: 'x' (13@130-139) thing (13@130-139): 'x' (13@130-139) sending: ')' (14@140-149) line (12@120-149): '(' (12@120-129) thing (13@130-139) ')' (14@140-149) sending: '(' (15@150-159) sending: 'x' (16@160-169) thing (16@160-169): 'x' (16@160-169) sending: ')' (17@170-179) line (15@150-179): '(' (15@150-159) thing (16@160-169) ')' (17@170-179) sending: '(' (18@180-189) sending: 'x' (19@190-199) thing (19@190-199): 'x' 
(19@190-199) sending: ')' (20@200-209) 200.200-209.208: memory exhausted Freeing nterm thing (19@190-199) Freeing nterm line (15@150-179) Freeing nterm line (12@120-149) Freeing nterm line (9@90-119) Freeing nterm line (6@60-89) Freeing nterm line (3@30-59) Freeing nterm line (0@0-29) Parsing FAILED (status 2). ./actions.at:1047: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 335. actions.at:1047: ok 336. actions.at:1048: testing Printers and Destructors with union ... ./actions.at:1048: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./actions.at:1048: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./actions.at:1048: $PREPARSER ./input '(x)' stderr: sending: '(' (0@0-9) sending: 'x' (1@10-19) thing (1@10-19): 'x' (1@10-19) sending: ')' (2@20-29) line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29) sending: END (3@30-39) input (0@29-29): /* Nothing */ input (2@0-29): line (0@0-29) input (0@29-29) Freeing token END (3@30-39) Freeing nterm input (2@0-29) Successful parse. ./actions.at:1048: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./actions.at:1048: $PREPARSER ./input '!' stderr: sending: '!' (0@0-9) sending: END (1@10-19) raise (4@9-9): %empty check-spontaneous-errors (5@9-19): error (@9-19) Freeing token END (1@10-19) Freeing nterm input (5@0-19) Successful parse. ./actions.at:1048: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./actions.at:1048: $PREPARSER ./input '!!!' stderr: sending: '!' (0@0-9) sending: '!' (1@10-19) sending: '!' (2@20-29) raise (5@10-29): ! (1@20-29) ! (2@20-29) check-spontaneous-errors (5@10-29): error (@10-29) sending: END (3@30-39) Freeing token END (3@30-39) Freeing nterm input (5@0-29) Successful parse. 
./actions.at:1048: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./actions.at:1048: $PREPARSER ./input '(y)' stderr: sending: '(' (0@0-9) sending: 'y' (1@10-19) 10.10-19.18: syntax error, unexpected 'y', expecting 'x' Freeing token 'y' (1@10-19) sending: ')' (2@20-29) line (-1@0-29): '(' (0@0-9) error (@10-19) ')' (2@20-29) sending: END (3@30-39) input (0@29-29): /* Nothing */ input (2@0-29): line (-1@0-29) input (0@29-29) Freeing token END (3@30-39) Freeing nterm input (2@0-29) Successful parse. ./actions.at:1048: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./actions.at:1048: $PREPARSER ./input '(xxxxx)(x)(x)y' stderr: sending: '(' (0@0-9) sending: 'x' (1@10-19) thing (1@10-19): 'x' (1@10-19) sending: 'x' (2@20-29) thing (2@20-29): 'x' (2@20-29) sending: 'x' (3@30-39) 30.30-39.38: syntax error, unexpected 'x', expecting ')' Freeing nterm thing (2@20-29) Freeing nterm thing (1@10-19) Freeing token 'x' (3@30-39) sending: 'x' (4@40-49) Freeing token 'x' (4@40-49) sending: 'x' (5@50-59) Freeing token 'x' (5@50-59) sending: ')' (6@60-69) line (-1@0-69): '(' (0@0-9) error (@10-59) ')' (6@60-69) sending: '(' (7@70-79) sending: 'x' (8@80-89) thing (8@80-89): 'x' (8@80-89) sending: ')' (9@90-99) line (7@70-99): '(' (7@70-79) thing (8@80-89) ')' (9@90-99) sending: '(' (10@100-109) sending: 'x' (11@110-119) thing (11@110-119): 'x' (11@110-119) sending: ')' (12@120-129) line (10@100-129): '(' (10@100-109) thing (11@110-119) ')' (12@120-129) sending: 'y' (13@130-139) input (0@129-129): /* Nothing */ input (2@100-129): line (10@100-129) input (0@129-129) input (2@70-129): line (7@70-99) input (2@100-129) input (2@0-129): line (-1@0-69) input (2@70-129) 130.130-139.138: syntax error, unexpected 'y', expecting END Freeing nterm input (2@0-129) Freeing token 'y' (13@130-139) Parsing FAILED. 
./actions.at:1048: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./actions.at:1048: $PREPARSER ./input '(x)(x)x' In file included from /usr/include/c++/12/vector:70, from xc.hh:56, from xc.cc:80: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at xc.cc:1870:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at xc.cc:1706:66, inlined from 'void xc_::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, xc_::parser::glr_state*, xc_::parser::glr_state*, {anonymous}::rule_num)' at xc.cc:2139:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at xc.cc:1870:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at xc.cc:1692:58, inlined from 'void xc_::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const xc_::parser::value_type&, const xc_::parser::location_type&)' at xc.cc:2641:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at xc.cc:1870:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at xc.cc:1692:58, inlined from 'void xc_::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, xc_::parser::glr_state*, {anonymous}::rule_num)' at xc.cc:2624:58, inlined from 'YYRESULTTAG xc_::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at xc.cc:2612:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./headers.at:331: echo "xc" >>expout stderr: sending: '(' (0@0-9) sending: 'x' (1@10-19) thing (1@10-19): 'x' (1@10-19) sending: ')' (2@20-29) line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29) sending: '(' (3@30-39) sending: 'x' (4@40-49) thing (4@40-49): 'x' (4@40-49) sending: ')' (5@50-59) line (3@30-59): '(' (3@30-39) thing (4@40-49) ')' (5@50-59) sending: 'x' (6@60-69) thing (6@60-69): 'x' (6@60-69) sending: END (7@70-79) 70.70-79.78: syntax error, unexpected END, expecting 'x' Freeing nterm thing (6@60-69) Freeing nterm line (3@30-59) Freeing nterm line (0@0-29) Freeing token END (7@70-79) Parsing FAILED. 
./actions.at:1048: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./headers.at:332: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -d -o xd.cc xd.y ./actions.at:1048: $PREPARSER ./input '(x)(x)(x)(x)(x)(x)(x)' stderr: sending: '(' (0@0-9) sending: 'x' (1@10-19) thing (1@10-19): 'x' (1@10-19) sending: ')' (2@20-29) line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29) sending: '(' (3@30-39) sending: 'x' (4@40-49) thing (4@40-49): 'x' (4@40-49) sending: ')' (5@50-59) line (3@30-59): '(' (3@30-39) thing (4@40-49) ')' (5@50-59) sending: '(' (6@60-69) sending: 'x' (7@70-79) thing (7@70-79): 'x' (7@70-79) sending: ')' (8@80-89) line (6@60-89): '(' (6@60-69) thing (7@70-79) ')' (8@80-89) sending: '(' (9@90-99) sending: 'x' (10@100-109) thing (10@100-109): 'x' (10@100-109) sending: ')' (11@110-119) line (9@90-119): '(' (9@90-99) thing (10@100-109) ')' (11@110-119) sending: '(' (12@120-129) sending: 'x' (13@130-139) thing (13@130-139): 'x' (13@130-139) sending: ')' (14@140-149) line (12@120-149): '(' (12@120-129) thing (13@130-139) ')' (14@140-149) sending: '(' (15@150-159) sending: 'x' (16@160-169) thing (16@160-169): 'x' (16@160-169) sending: ')' (17@170-179) line (15@150-179): '(' (15@150-159) thing (16@160-169) ')' (17@170-179) sending: '(' (18@180-189) sending: 'x' (19@190-199) thing (19@190-199): 'x' (19@190-199) sending: ')' (20@200-209) 200.200-209.208: memory exhausted Freeing nterm thing (19@190-199) Freeing nterm line (15@150-179) Freeing nterm line (12@120-149) Freeing nterm line (9@90-119) Freeing nterm line (6@60-89) Freeing nterm line (3@30-59) Freeing nterm line (0@0-29) Parsing FAILED (status 2). ./actions.at:1048: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 336. actions.at:1048: ok 337. actions.at:1050: testing Printers and Destructors: %glr-parser ... 
./actions.at:1050: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./headers.at:332: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS -c -o xd.o xd.cc ./actions.at:1050: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from input.cc:86: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2378:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:2214:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2644:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | 
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2378:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:2200:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&, const yy::parser::location_type&)' at input.cc:3075:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2378:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:2200:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:3058:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:3046:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./actions.at:478: $PREPARSER ./input stderr: ./actions.at:478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 333. actions.at:478: ok 338. actions.at:1051: testing Printers and Destructors with union: %glr-parser ... 
./actions.at:1051: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:1051: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./actions.at:1050: $PREPARSER ./input '(x)'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (0@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1050: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1050: $PREPARSER ./input '!'
stderr:
sending: '!' (0@0-9)
sending: END (1@10-19)
raise (4@9-9): %empty
check-spontaneous-errors (5@9-19): error (@9-19)
Freeing token END (1@10-19)
Freeing nterm input (5@0-19)
Successful parse.
./actions.at:1050: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1050: $PREPARSER ./input '!!!'
stderr:
sending: '!' (0@0-9)
sending: '!' (1@10-19)
sending: '!' (2@20-29)
raise (5@10-29): ! (1@20-29) ! (2@20-29)
check-spontaneous-errors (5@10-29): error (@10-29)
sending: END (3@30-39)
Freeing token END (3@30-39)
Freeing nterm input (5@0-29)
Successful parse.
./actions.at:1050: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1050: $PREPARSER ./input '(y)'
stderr:
sending: '(' (0@0-9)
sending: 'y' (1@10-19)
10.10-19.18: syntax error, unexpected 'y', expecting 'x'
Freeing token 'y' (1@10-19)
sending: ')' (2@20-29)
line (-1@0-29): '(' (0@0-9) error (@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (-1@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1050: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1050: $PREPARSER ./input '(xxxxx)(x)(x)y'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: 'x' (2@20-29)
thing (2@20-29): 'x' (2@20-29)
sending: 'x' (3@30-39)
30.30-39.38: syntax error, unexpected 'x', expecting ')'
Freeing nterm thing (2@20-29)
Freeing nterm thing (1@10-19)
Freeing token 'x' (3@30-39)
sending: 'x' (4@40-49)
Freeing token 'x' (4@40-49)
sending: 'x' (5@50-59)
Freeing token 'x' (5@50-59)
sending: ')' (6@60-69)
line (-1@0-69): '(' (0@0-9) error (@10-59) ')' (6@60-69)
sending: '(' (7@70-79)
sending: 'x' (8@80-89)
thing (8@80-89): 'x' (8@80-89)
sending: ')' (9@90-99)
line (7@70-99): '(' (7@70-79) thing (8@80-89) ')' (9@90-99)
sending: '(' (10@100-109)
sending: 'x' (11@110-119)
thing (11@110-119): 'x' (11@110-119)
sending: ')' (12@120-129)
line (10@100-129): '(' (10@100-109) thing (11@110-119) ')' (12@120-129)
sending: 'y' (13@130-139)
input (0@129-129): /* Nothing */
input (2@100-129): line (10@100-129) input (0@129-129)
input (2@70-129): line (7@70-99) input (2@100-129)
input (2@0-129): line (-1@0-69) input (2@70-129)
130.130-139.138: syntax error, unexpected 'y', expecting END
Freeing nterm input (2@0-129)
Freeing token 'y' (13@130-139)
Parsing FAILED.
./actions.at:1050: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1050: $PREPARSER ./input '(x)(x)x'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: '(' (3@30-39)
sending: 'x' (4@40-49)
thing (4@40-49): 'x' (4@40-49)
sending: ')' (5@50-59)
line (3@30-59): '(' (3@30-39) thing (4@40-49) ')' (5@50-59)
sending: 'x' (6@60-69)
thing (6@60-69): 'x' (6@60-69)
sending: END (7@70-79)
70.70-79.78: syntax error, unexpected END, expecting 'x'
Freeing nterm thing (6@60-69)
Freeing nterm line (3@30-59)
Freeing nterm line (0@0-29)
Freeing token END (7@70-79)
Parsing FAILED.
./actions.at:1050: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
337. actions.at:1050: ok
339. actions.at:1053: testing Printers and Destructors: %header lalr1.cc ...
./actions.at:1053: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./actions.at:1053: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./actions.at:1051: $PREPARSER ./input '(x)'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (0@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1051: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1051: $PREPARSER ./input '!'
stderr:
sending: '!' (0@0-9)
sending: END (1@10-19)
raise (4@9-9): %empty
check-spontaneous-errors (5@9-19): error (@9-19)
Freeing token END (1@10-19)
Freeing nterm input (5@0-19)
Successful parse.
./actions.at:1051: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1051: $PREPARSER ./input '!!!'
stderr:
sending: '!' (0@0-9)
sending: '!' (1@10-19)
sending: '!' (2@20-29)
raise (5@10-29): ! (1@20-29) ! (2@20-29)
check-spontaneous-errors (5@10-29): error (@10-29)
sending: END (3@30-39)
Freeing token END (3@30-39)
Freeing nterm input (5@0-29)
Successful parse.
./actions.at:1051: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1051: $PREPARSER ./input '(y)'
stderr:
sending: '(' (0@0-9)
sending: 'y' (1@10-19)
10.10-19.18: syntax error, unexpected 'y', expecting 'x'
Freeing token 'y' (1@10-19)
sending: ')' (2@20-29)
line (-1@0-29): '(' (0@0-9) error (@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (-1@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1051: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1051: $PREPARSER ./input '(xxxxx)(x)(x)y'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: 'x' (2@20-29)
thing (2@20-29): 'x' (2@20-29)
sending: 'x' (3@30-39)
30.30-39.38: syntax error, unexpected 'x', expecting ')'
Freeing nterm thing (2@20-29)
Freeing nterm thing (1@10-19)
Freeing token 'x' (3@30-39)
sending: 'x' (4@40-49)
Freeing token 'x' (4@40-49)
sending: 'x' (5@50-59)
Freeing token 'x' (5@50-59)
sending: ')' (6@60-69)
line (-1@0-69): '(' (0@0-9) error (@10-59) ')' (6@60-69)
sending: '(' (7@70-79)
sending: 'x' (8@80-89)
thing (8@80-89): 'x' (8@80-89)
sending: ')' (9@90-99)
line (7@70-99): '(' (7@70-79) thing (8@80-89) ')' (9@90-99)
sending: '(' (10@100-109)
sending: 'x' (11@110-119)
thing (11@110-119): 'x' (11@110-119)
sending: ')' (12@120-129)
line (10@100-129): '(' (10@100-109) thing (11@110-119) ')' (12@120-129)
sending: 'y' (13@130-139)
input (0@129-129): /* Nothing */
input (2@100-129): line (10@100-129) input (0@129-129)
input (2@70-129): line (7@70-99) input (2@100-129)
input (2@0-129): line (-1@0-69) input (2@70-129)
130.130-139.138: syntax error, unexpected 'y', expecting END
Freeing nterm input (2@0-129)
Freeing token 'y' (13@130-139)
Parsing FAILED.
./actions.at:1051: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1051: $PREPARSER ./input '(x)(x)x'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: '(' (3@30-39)
sending: 'x' (4@40-49)
thing (4@40-49): 'x' (4@40-49)
sending: ')' (5@50-59)
line (3@30-59): '(' (3@30-39) thing (4@40-49) ')' (5@50-59)
sending: 'x' (6@60-69)
thing (6@60-69): 'x' (6@60-69)
sending: END (7@70-79)
70.70-79.78: syntax error, unexpected END, expecting 'x'
Freeing nterm thing (6@60-69)
Freeing nterm line (3@30-59)
Freeing nterm line (0@0-29)
Freeing token END (7@70-79)
Parsing FAILED.
./actions.at:1051: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
338. actions.at:1051: ok
340. actions.at:1054: testing Printers and Destructors with union: %header lalr1.cc ...
./actions.at:1054: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./actions.at:1054: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./actions.at:1053: $PREPARSER ./input '(x)'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (0@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1053: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1053: $PREPARSER ./input '!'
stderr:
sending: '!' (0@0-9)
sending: END (1@10-19)
raise (4@9-9): %empty
check-spontaneous-errors (5@9-19): error (@9-19)
Freeing token END (1@10-19)
Freeing nterm input (5@0-19)
Successful parse.
./actions.at:1053: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1053: $PREPARSER ./input '!!!'
stderr:
sending: '!' (0@0-9)
sending: '!' (1@10-19)
sending: '!' (2@20-29)
raise (5@10-29): ! (1@20-29) ! (2@20-29)
check-spontaneous-errors (5@10-29): error (@10-29)
sending: END (3@30-39)
Freeing token END (3@30-39)
Freeing nterm input (5@0-29)
Successful parse.
./actions.at:1053: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1053: $PREPARSER ./input '(y)'
stderr:
sending: '(' (0@0-9)
sending: 'y' (1@10-19)
10.10-19.18: syntax error, unexpected 'y', expecting 'x'
Freeing token 'y' (1@10-19)
sending: ')' (2@20-29)
line (-1@0-29): '(' (0@0-9) error (@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (-1@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1053: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1053: $PREPARSER ./input '(xxxxx)(x)(x)y'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: 'x' (2@20-29)
thing (2@20-29): 'x' (2@20-29)
sending: 'x' (3@30-39)
30.30-39.38: syntax error, unexpected 'x', expecting ')'
Freeing nterm thing (2@20-29)
Freeing nterm thing (1@10-19)
Freeing token 'x' (3@30-39)
sending: 'x' (4@40-49)
Freeing token 'x' (4@40-49)
sending: 'x' (5@50-59)
Freeing token 'x' (5@50-59)
sending: ')' (6@60-69)
line (-1@0-69): '(' (0@0-9) error (@10-59) ')' (6@60-69)
sending: '(' (7@70-79)
sending: 'x' (8@80-89)
thing (8@80-89): 'x' (8@80-89)
sending: ')' (9@90-99)
line (7@70-99): '(' (7@70-79) thing (8@80-89) ')' (9@90-99)
sending: '(' (10@100-109)
sending: 'x' (11@110-119)
thing (11@110-119): 'x' (11@110-119)
sending: ')' (12@120-129)
line (10@100-129): '(' (10@100-109) thing (11@110-119) ')' (12@120-129)
sending: 'y' (13@130-139)
input (0@129-129): /* Nothing */
input (2@100-129): line (10@100-129) input (0@129-129)
input (2@70-129): line (7@70-99) input (2@100-129)
input (2@0-129): line (-1@0-69) input (2@70-129)
130.130-139.138: syntax error, unexpected 'y', expecting END
Freeing nterm input (2@0-129)
Freeing token 'y' (13@130-139)
Parsing FAILED.
./actions.at:1053: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1053: $PREPARSER ./input '(x)(x)x'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: '(' (3@30-39)
sending: 'x' (4@40-49)
thing (4@40-49): 'x' (4@40-49)
sending: ')' (5@50-59)
line (3@30-59): '(' (3@30-39) thing (4@40-49) ')' (5@50-59)
sending: 'x' (6@60-69)
thing (6@60-69): 'x' (6@60-69)
sending: END (7@70-79)
70.70-79.78: syntax error, unexpected END, expecting 'x'
Freeing nterm thing (6@60-69)
Freeing nterm line (3@30-59)
Freeing nterm line (0@0-29)
Freeing token END (7@70-79)
Parsing FAILED.
./actions.at:1053: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
339. actions.at:1053: ok
341. actions.at:1056: testing Printers and Destructors: %header glr.cc ...
./actions.at:1056: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./actions.at:1056: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from xd.hh:51,
                 from xd.cc:80:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at xd.cc:1870:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at xd.cc:1706:66,
    inlined from 'void xd_::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, xd_::parser::glr_state*, xd_::parser::glr_state*, {anonymous}::rule_num)' at xd.cc:2139:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |         _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at xd.cc:1870:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at xd.cc:1692:58,
    inlined from 'void xd_::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const xd_::parser::value_type&, const xd_::parser::location_type&)' at xd.cc:2641:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |         _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at xd.cc:1870:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at xd.cc:1692:58,
    inlined from 'void xd_::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, xd_::parser::glr_state*, {anonymous}::rule_num)' at xd.cc:2624:58,
    inlined from 'YYRESULTTAG xd_::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at xd.cc:2612:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |         _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./headers.at:332: echo "xd" >>expout
./headers.at:342: "$PERL" -n -0777 -e '
  # Ignore comments.
  s{/\*.*?\*/}{}gs;
  s{//.*}{}g;
  # Ignore warnings.
  s{# *pragma .* message ".*"}{}g;
  s{\b((defined|if)\ YYDEBUG
     |YYChar # Template parameter.
     |YYNTOKENS # This is actually scoped in a C++ class.
     |YYPUSH_MORE(?:_DEFINED)?
     |S_(YY(ACCEPT|EMPTY|EOF|error|UNDEF)) # These guys are scoped.
     |YY(?:_REINTERPRET)?_CAST
     |YY_ATTRIBUTE(?:_PURE|_UNUSED)
     |YY_CONSTEXPR
     |YY_COPY
     |YY_CPLUSPLUS
     |YY_IGNORE_(?:MAYBE_UNINITIALIZED|USELESS_CAST)_(?:BEGIN|END)
     |YY_INITIAL_VALUE
     |YY_MOVE
     |YY_MOVE_OR_COPY
     |YY_MOVE_REF
     |YY_NOEXCEPT
     |YY_NOTHROW
     |YY_NULLPTR
     |YY_RVREF
     |YY_USE
     |YY_\w+_INCLUDED # Header guards.
     |FILE\ \*yyo # Function argument.
     |const\ yylocp # Function argument.
     )\b}{}gx;
  while (/^(.*YY.*)$/gm)
    { print "$ARGV: invalid exported YY: $1\n"; }
  if ($ARGV =~ /\.h$/)
    { while (/^(.*yy.*)$/gm)
        { print "$ARGV: invalid exported yy: $1\n"; } }
' -- *.hh *.h
./headers.at:387: $CC $CFLAGS $CPPFLAGS -c -o c-only.o c-only.c
stderr:
stdout:
./headers.at:387: $CXX $CPPFLAGS $CXXFLAGS -c -o cxx-only.o cxx-only.cc
stderr:
stdout:
./headers.at:387: $CXX $CXXFLAGS $CPPFLAGS $LDFLAGS c-only.o cxx-only.o -o c-and-cxx || exit 77
stderr:
stdout:
./headers.at:387: $PREPARSER ./c-and-cxx
stderr:
./headers.at:387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./headers.at:392: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o parser x[1-9a-d].o -DCC_IS_CXX=$CC_IS_CXX main.cc $LIBS
stderr:
stdout:
./actions.at:1054: $PREPARSER ./input '(x)'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (0@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1054: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1054: $PREPARSER ./input '!'
stderr:
sending: '!' (0@0-9)
sending: END (1@10-19)
raise (4@9-9): %empty
check-spontaneous-errors (5@9-19): error (@9-19)
Freeing token END (1@10-19)
Freeing nterm input (5@0-19)
Successful parse.
./actions.at:1054: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1054: $PREPARSER ./input '!!!'
stderr:
sending: '!' (0@0-9)
sending: '!' (1@10-19)
sending: '!' (2@20-29)
raise (5@10-29): ! (1@20-29) ! (2@20-29)
check-spontaneous-errors (5@10-29): error (@10-29)
sending: END (3@30-39)
Freeing token END (3@30-39)
Freeing nterm input (5@0-29)
Successful parse.
./actions.at:1054: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1054: $PREPARSER ./input '(y)'
stderr:
sending: '(' (0@0-9)
sending: 'y' (1@10-19)
10.10-19.18: syntax error, unexpected 'y', expecting 'x'
Freeing token 'y' (1@10-19)
sending: ')' (2@20-29)
line (-1@0-29): '(' (0@0-9) error (@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (-1@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1054: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1054: $PREPARSER ./input '(xxxxx)(x)(x)y'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: 'x' (2@20-29)
thing (2@20-29): 'x' (2@20-29)
sending: 'x' (3@30-39)
30.30-39.38: syntax error, unexpected 'x', expecting ')'
Freeing nterm thing (2@20-29)
Freeing nterm thing (1@10-19)
Freeing token 'x' (3@30-39)
sending: 'x' (4@40-49)
Freeing token 'x' (4@40-49)
sending: 'x' (5@50-59)
Freeing token 'x' (5@50-59)
sending: ')' (6@60-69)
line (-1@0-69): '(' (0@0-9) error (@10-59) ')' (6@60-69)
sending: '(' (7@70-79)
sending: 'x' (8@80-89)
thing (8@80-89): 'x' (8@80-89)
sending: ')' (9@90-99)
line (7@70-99): '(' (7@70-79) thing (8@80-89) ')' (9@90-99)
sending: '(' (10@100-109)
sending: 'x' (11@110-119)
thing (11@110-119): 'x' (11@110-119)
sending: ')' (12@120-129)
line (10@100-129): '(' (10@100-109) thing (11@110-119) ')' (12@120-129)
sending: 'y' (13@130-139)
input (0@129-129): /* Nothing */
input (2@100-129): line (10@100-129) input (0@129-129)
input (2@70-129): line (7@70-99) input (2@100-129)
input (2@0-129): line (-1@0-69) input (2@70-129)
130.130-139.138: syntax error, unexpected 'y', expecting END
Freeing nterm input (2@0-129)
Freeing token 'y' (13@130-139)
Parsing FAILED.
./actions.at:1054: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1054: $PREPARSER ./input '(x)(x)x'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: '(' (3@30-39)
sending: 'x' (4@40-49)
thing (4@40-49): 'x' (4@40-49)
sending: ')' (5@50-59)
line (3@30-59): '(' (3@30-39) thing (4@40-49) ')' (5@50-59)
sending: 'x' (6@60-69)
thing (6@60-69): 'x' (6@60-69)
sending: END (7@70-79)
70.70-79.78: syntax error, unexpected END, expecting 'x'
Freeing nterm thing (6@60-69)
Freeing nterm line (3@30-59)
Freeing nterm line (0@0-29)
Freeing token END (7@70-79)
Parsing FAILED.
./actions.at:1054: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
340. actions.at:1054: ok
342. actions.at:1057: testing Printers and Destructors with union: %header glr.cc ...
./actions.at:1057: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./actions.at:1057: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./headers.at:394: $PREPARSER ./parser
stderr:
./headers.at:394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
310. headers.at:199: ok
343. actions.at:1059: testing Printers and Destructors: %header glr2.cc ...
./actions.at:1059: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./actions.at:1059: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./actions.at:1056: $PREPARSER ./input '(x)'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (0@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1056: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1056: $PREPARSER ./input '!'
stderr:
sending: '!' (0@0-9)
sending: END (1@10-19)
raise (4@9-9): %empty
check-spontaneous-errors (5@9-19): error (@9-19)
Freeing token END (1@10-19)
Freeing nterm input (5@0-19)
Successful parse.
./actions.at:1056: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1056: $PREPARSER ./input '!!!'
stderr:
sending: '!' (0@0-9)
sending: '!' (1@10-19)
sending: '!' (2@20-29)
raise (5@10-29): ! (1@20-29) ! (2@20-29)
check-spontaneous-errors (5@10-29): error (@10-29)
sending: END (3@30-39)
Freeing token END (3@30-39)
Freeing nterm input (5@0-29)
Successful parse.
./actions.at:1056: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1056: $PREPARSER ./input '(y)'
stderr:
sending: '(' (0@0-9)
sending: 'y' (1@10-19)
10.10-19.18: syntax error, unexpected 'y', expecting 'x'
Freeing token 'y' (1@10-19)
sending: ')' (2@20-29)
line (-1@0-29): '(' (0@0-9) error (@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (-1@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1056: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1056: $PREPARSER ./input '(xxxxx)(x)(x)y'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: 'x' (2@20-29)
thing (2@20-29): 'x' (2@20-29)
sending: 'x' (3@30-39)
30.30-39.38: syntax error, unexpected 'x', expecting ')'
Freeing nterm thing (2@20-29)
Freeing nterm thing (1@10-19)
Freeing token 'x' (3@30-39)
sending: 'x' (4@40-49)
Freeing token 'x' (4@40-49)
sending: 'x' (5@50-59)
Freeing token 'x' (5@50-59)
sending: ')' (6@60-69)
line (-1@0-69): '(' (0@0-9) error (@10-59) ')' (6@60-69)
sending: '(' (7@70-79)
sending: 'x' (8@80-89)
thing (8@80-89): 'x' (8@80-89)
sending: ')' (9@90-99)
line (7@70-99): '(' (7@70-79) thing (8@80-89) ')' (9@90-99)
sending: '(' (10@100-109)
sending: 'x' (11@110-119)
thing (11@110-119): 'x' (11@110-119)
sending: ')' (12@120-129)
line (10@100-129): '(' (10@100-109) thing (11@110-119) ')' (12@120-129)
sending: 'y' (13@130-139)
input (0@129-129): /* Nothing */
input (2@100-129): line (10@100-129) input (0@129-129)
input (2@70-129): line (7@70-99) input (2@100-129)
input (2@0-129): line (-1@0-69) input (2@70-129)
130.130-139.138: syntax error, unexpected 'y', expecting END
Freeing nterm input (2@0-129)
Freeing token 'y' (13@130-139)
Parsing FAILED.
./actions.at:1056: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1056: $PREPARSER ./input '(x)(x)x'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: '(' (3@30-39)
sending: 'x' (4@40-49)
thing (4@40-49): 'x' (4@40-49)
sending: ')' (5@50-59)
line (3@30-59): '(' (3@30-39) thing (4@40-49) ')' (5@50-59)
sending: 'x' (6@60-69)
thing (6@60-69): 'x' (6@60-69)
sending: END (7@70-79)
70.70-79.78: syntax error, unexpected END, expecting 'x'
Freeing nterm thing (6@60-69)
Freeing nterm line (3@30-59)
Freeing nterm line (0@0-29)
Freeing token END (7@70-79)
Parsing FAILED.
./actions.at:1056: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
341. actions.at:1056: ok
344. actions.at:1060: testing Printers and Destructors with union: %header glr2.cc ...
./actions.at:1060: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./actions.at:1060: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./actions.at:1057: $PREPARSER ./input '(x)'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (0@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1057: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1057: $PREPARSER ./input '!'
stderr:
sending: '!' (0@0-9)
sending: END (1@10-19)
raise (4@9-9): %empty
check-spontaneous-errors (5@9-19): error (@9-19)
Freeing token END (1@10-19)
Freeing nterm input (5@0-19)
Successful parse.
./actions.at:1057: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1057: $PREPARSER ./input '!!!'
stderr:
sending: '!' (0@0-9)
sending: '!' (1@10-19)
sending: '!' (2@20-29)
raise (5@10-29): ! (1@20-29) ! (2@20-29)
check-spontaneous-errors (5@10-29): error (@10-29)
sending: END (3@30-39)
Freeing token END (3@30-39)
Freeing nterm input (5@0-29)
Successful parse.
./actions.at:1057: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1057: $PREPARSER ./input '(y)'
stderr:
sending: '(' (0@0-9)
sending: 'y' (1@10-19)
10.10-19.18: syntax error, unexpected 'y', expecting 'x'
Freeing token 'y' (1@10-19)
sending: ')' (2@20-29)
line (-1@0-29): '(' (0@0-9) error (@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (-1@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1057: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1057: $PREPARSER ./input '(xxxxx)(x)(x)y'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: 'x' (2@20-29)
thing (2@20-29): 'x' (2@20-29)
sending: 'x' (3@30-39)
30.30-39.38: syntax error, unexpected 'x', expecting ')'
Freeing nterm thing (2@20-29)
Freeing nterm thing (1@10-19)
Freeing token 'x' (3@30-39)
sending: 'x' (4@40-49)
Freeing token 'x' (4@40-49)
sending: 'x' (5@50-59)
Freeing token 'x' (5@50-59)
sending: ')' (6@60-69)
line (-1@0-69): '(' (0@0-9) error (@10-59) ')' (6@60-69)
sending: '(' (7@70-79)
sending: 'x' (8@80-89)
thing (8@80-89): 'x' (8@80-89)
sending: ')' (9@90-99)
line (7@70-99): '(' (7@70-79) thing (8@80-89) ')' (9@90-99)
sending: '(' (10@100-109)
sending: 'x' (11@110-119)
thing (11@110-119): 'x' (11@110-119)
sending: ')' (12@120-129)
line (10@100-129): '(' (10@100-109) thing (11@110-119) ')' (12@120-129)
sending: 'y' (13@130-139)
input (0@129-129): /* Nothing */
input (2@100-129): line (10@100-129) input (0@129-129)
input (2@70-129): line (7@70-99) input (2@100-129)
input (2@0-129): line (-1@0-69) input (2@70-129)
130.130-139.138: syntax error, unexpected 'y', expecting END
Freeing nterm input (2@0-129)
Freeing token 'y' (13@130-139)
Parsing FAILED.
./actions.at:1057: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1057: $PREPARSER ./input '(x)(x)x'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: '(' (3@30-39)
sending: 'x' (4@40-49)
thing (4@40-49): 'x' (4@40-49)
sending: ')' (5@50-59)
line (3@30-59): '(' (3@30-39) thing (4@40-49) ')' (5@50-59)
sending: 'x' (6@60-69)
thing (6@60-69): 'x' (6@60-69)
sending: END (7@70-79)
70.70-79.78: syntax error, unexpected END, expecting 'x'
Freeing nterm thing (6@60-69)
Freeing nterm line (3@30-59)
Freeing nterm line (0@0-29)
Freeing token END (7@70-79)
Parsing FAILED.
./actions.at:1057: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
342. actions.at:1057: ok
345. actions.at:1071: testing Default tagless %printer and %destructor ...
./actions.at:1116: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:1116: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Werror
stderr:
input.y:30.3-5: error: useless %destructor for type <*> [-Werror=other]
input.y:30.3-5: error: useless %printer for type <*> [-Werror=other]
./actions.at:1116: sed 's,.*/$,,' stderr 1>&2
./actions.at:1116: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=error
./actions.at:1116: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Wnone,none -Werror --trace=none
./actions.at:1116: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=none -Werror --trace=none
./actions.at:1120: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./actions.at:1121: $PREPARSER ./input --debug
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token 'a' (1.1: <> printer for 'a' @ 1)
Shifting token 'a' (1.1: <> printer for 'a' @ 1)
Entering state 1
Stack now 0 1
Reading a token
Next token is token 'b' (1.2: 'b'/'c' printer for 'b' @ 2)
Shifting token 'b' (1.2: 'b'/'c' printer for 'b' @ 2)
Entering state 3
Stack now 0 1 3
Reading a token
Next token is token 'c' (1.3: 'b'/'c' printer for 'c' @ 3)
Shifting token 'c' (1.3: 'b'/'c' printer for 'c' @ 3)
Entering state 5
Stack now 0 1 3 5
Reading a token
Next token is token 'd' (1.4: <> printer for 'd' @ 4)
Shifting token 'd' (1.4: <> printer for 'd' @ 4)
Entering state 6
Stack now 0 1 3 5 6
Reading a token
Now at end of input.
1.5: syntax error, unexpected end of file, expecting 'e'
Error: popping token 'd' (1.4: <> printer for 'd' @ 4)
Stack now 0 1 3 5
Error: popping token 'c' (1.3: 'b'/'c' printer for 'c' @ 3)
Stack now 0 1 3
Error: popping token 'b' (1.2: 'b'/'c' printer for 'b' @ 2)
Stack now 0 1
Error: popping token 'a' (1.1: <> printer for 'a' @ 1)
Stack now 0
Cleanup: discarding lookahead token "end of file" (1.5: )
Stack now 0
./actions.at:1121: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
345. actions.at:1071: ok
346. actions.at:1174: testing Default tagged and per-type %printer and %destructor ...
./actions.at:1233: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:1233: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Werror
stderr:
input.y:22.3-4: error: useless %destructor for type <> [-Werror=other]
input.y:22.3-4: error: useless %printer for type <> [-Werror=other]
./actions.at:1233: sed 's,.*/$,,' stderr 1>&2
./actions.at:1233: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=error
./actions.at:1233: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Wnone,none -Werror --trace=none
./actions.at:1233: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=none -Werror --trace=none
./actions.at:1237: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./actions.at:1238: $PREPARSER ./input --debug
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token 'a' (<*>//e printer)
Shifting token 'a' (<*>//e printer)
Entering state 1
Stack now 0 1
Reading a token
Next token is token 'b' ( printer)
Shifting token 'b' ( printer)
Entering state 3
Stack now 0 1 3
Reading a token
Next token is token 'c' ('c' printer)
Shifting token 'c' ('c' printer)
Entering state 5
Stack now 0 1 3 5
Reading a token
Next token is token 'd' ('d' printer)
Shifting token 'd' ('d' printer)
Entering state 6
Stack now 0 1 3 5 6
Reading a token
Next token is token 'e' (<*>//e printer)
Shifting token 'e' (<*>//e printer)
Entering state 7
Stack now 0 1 3 5 6 7
Reading a token
Next token is token 'f' (<*>//e printer)
Shifting token 'f' (<*>//e printer)
Entering state 8
Stack now 0 1 3 5 6 7 8
Reading a token
Now at end of input.
syntax error, unexpected end of file, expecting 'g'
Error: popping token 'f' (<*>//e printer)
Stack now 0 1 3 5 6 7
Error: popping token 'e' (<*>//e printer)
Stack now 0 1 3 5 6
Error: popping token 'd' ('d' printer)
Stack now 0 1 3 5
Error: popping token 'c' ('c' printer)
Stack now 0 1 3
Error: popping token 'b' ( printer)
Stack now 0 1
Error: popping token 'a' (<*>//e printer)
Stack now 0
Cleanup: discarding lookahead token "end of file" ()
Stack now 0
./actions.at:1238: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
346. actions.at:1174: ok
347. actions.at:1307: testing Default %printer and %destructor for user-defined end token ...
./actions.at:1416: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input0.c input0.y
./actions.at:1416: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input0.c input0.y -Werror
stderr:
input0.y:30.3-5: error: useless %destructor for type <*> [-Werror=other]
input0.y:30.3-5: error: useless %printer for type <*> [-Werror=other]
./actions.at:1416: sed 's,.*/$,,' stderr 1>&2
./actions.at:1416: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input0.c input0.y --warnings=error
./actions.at:1416: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input0.c input0.y -Wnone,none -Werror --trace=none
./actions.at:1416: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input0.c input0.y --warnings=none -Werror --trace=none
./actions.at:1416: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input0 input0.c $LIBS
stderr:
stdout:
./actions.at:1416: $PREPARSER ./input0 --debug
stderr:
Starting parse
Entering state 0
Stack now 0
Reducing stack by rule 1 (line 49):
-> $$ = nterm start (1.1: <> for 'S' @ 1)
Entering state 1
Stack now 0 1
Reading a token
Now at end of input.
Shifting token END (1.1: <> for 'E' @ 1)
Entering state 2
Stack now 0 1 2
Stack now 0 1 2
Cleanup: popping token END (1.1: <> for 'E' @ 1)
Cleanup: popping nterm start (1.1: <> for 'S' @ 1)
./actions.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1417: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input1.c input1.y
./actions.at:1417: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input1.c input1.y -Werror
stderr:
input1.y:30.3-4: error: useless %destructor for type <> [-Werror=other]
input1.y:30.3-4: error: useless %printer for type <> [-Werror=other]
./actions.at:1417: sed 's,.*/$,,' stderr 1>&2
./actions.at:1417: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input1.c input1.y --warnings=error
./actions.at:1417: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input1.c input1.y -Wnone,none -Werror --trace=none
./actions.at:1417: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input1.c input1.y --warnings=none -Werror --trace=none
./actions.at:1417: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input1 input1.c $LIBS
stderr:
stdout:
./actions.at:1417: $PREPARSER ./input1 --debug
stderr:
Starting parse
Entering state 0
Stack now 0
Reducing stack by rule 1 (line 49):
-> $$ = nterm start (1.1: <*> for 'S' @ 1)
Entering state 1
Stack now 0 1
Reading a token
Now at end of input.
Shifting token END (1.1: <*> for 'E' @ 1)
Entering state 2
Stack now 0 1 2
Stack now 0 1 2
Cleanup: popping token END (1.1: <*> for 'E' @ 1)
Cleanup: popping nterm start (1.1: <*> for 'S' @ 1)
./actions.at:1417: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
347. actions.at:1307: ok
348. actions.at:1429: testing Default %printer and %destructor are not for error or $undefined ...
./actions.at:1474: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:1474: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Werror
stderr:
input.y:23.6-8: error: useless %destructor for type <*> [-Werror=other]
input.y:23.6-8: error: useless %printer for type <*> [-Werror=other]
./actions.at:1474: sed 's,.*/$,,' stderr 1>&2
./actions.at:1474: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=error
./actions.at:1474: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Wnone,none -Werror --trace=none
./actions.at:1474: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=none -Werror --trace=none
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from input.hh:70,
                 from input.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1787:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:1623:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2056:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1787:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1609:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&, const yy::parser::location_type&)' at input.cc:2620:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1787:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1609:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2603:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:2591:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./actions.at:1059: $PREPARSER ./input '(x)'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (0@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1059: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1059: $PREPARSER ./input '!'
stderr:
sending: '!' (0@0-9)
sending: END (1@10-19)
raise (4@9-9): %empty
check-spontaneous-errors (5@9-19): error (@9-19)
Freeing token END (1@10-19)
Freeing nterm input (5@0-19)
Successful parse.
./actions.at:1059: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1059: $PREPARSER ./input '!!!'
stderr:
sending: '!' (0@0-9)
sending: '!' (1@10-19)
sending: '!' (2@20-29)
raise (5@10-29): ! (1@20-29) ! (2@20-29)
check-spontaneous-errors (5@10-29): error (@10-29)
sending: END (3@30-39)
Freeing token END (3@30-39)
Freeing nterm input (5@0-29)
Successful parse.
./actions.at:1059: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1059: $PREPARSER ./input '(y)'
stderr:
sending: '(' (0@0-9)
sending: 'y' (1@10-19)
10.10-19.18: syntax error, unexpected 'y', expecting 'x'
Freeing token 'y' (1@10-19)
sending: ')' (2@20-29)
line (-1@0-29): '(' (0@0-9) error (@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (-1@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1059: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1059: $PREPARSER ./input '(xxxxx)(x)(x)y'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: 'x' (2@20-29)
thing (2@20-29): 'x' (2@20-29)
sending: 'x' (3@30-39)
30.30-39.38: syntax error, unexpected 'x', expecting ')'
Freeing nterm thing (2@20-29)
Freeing nterm thing (1@10-19)
Freeing token 'x' (3@30-39)
sending: 'x' (4@40-49)
Freeing token 'x' (4@40-49)
sending: 'x' (5@50-59)
Freeing token 'x' (5@50-59)
sending: ')' (6@60-69)
line (-1@0-69): '(' (0@0-9) error (@10-59) ')' (6@60-69)
sending: '(' (7@70-79)
sending: 'x' (8@80-89)
thing (8@80-89): 'x' (8@80-89)
sending: ')' (9@90-99)
line (7@70-99): '(' (7@70-79) thing (8@80-89) ')' (9@90-99)
sending: '(' (10@100-109)
sending: 'x' (11@110-119)
thing (11@110-119): 'x' (11@110-119)
sending: ')' (12@120-129)
line (10@100-129): '(' (10@100-109) thing (11@110-119) ')' (12@120-129)
sending: 'y' (13@130-139)
input (0@129-129): /* Nothing */
input (2@100-129): line (10@100-129) input (0@129-129)
input (2@70-129): line (7@70-99) input (2@100-129)
input (2@0-129): line (-1@0-69) input (2@70-129)
130.130-139.138: syntax error, unexpected 'y', expecting END
Freeing nterm input (2@0-129)
Freeing token 'y' (13@130-139)
Parsing FAILED.
./actions.at:1059: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1059: $PREPARSER ./input '(x)(x)x'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: '(' (3@30-39)
sending: 'x' (4@40-49)
thing (4@40-49): 'x' (4@40-49)
sending: ')' (5@50-59)
line (3@30-59): '(' (3@30-39) thing (4@40-49) ')' (5@50-59)
sending: 'x' (6@60-69)
thing (6@60-69): 'x' (6@60-69)
sending: END (7@70-79)
70.70-79.78: syntax error, unexpected END, expecting 'x'
Freeing nterm thing (6@60-69)
Freeing nterm line (3@30-59)
Freeing nterm line (0@0-29)
Freeing token END (7@70-79)
Parsing FAILED.
./actions.at:1059: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1478: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
343. actions.at:1059: ok
349. actions.at:1532: testing Default %printer and %destructor are not for $accept ...
./actions.at:1582: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:1582: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Werror
stderr:
input.y:24.3-4: error: useless %destructor for type <> [-Werror=other]
input.y:24.3-4: error: useless %printer for type <> [-Werror=other]
./actions.at:1582: sed 's,.*/$,,' stderr 1>&2
./actions.at:1582: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=error
./actions.at:1582: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Wnone,none -Werror --trace=none
./actions.at:1582: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=none -Werror --trace=none
./actions.at:1586: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./actions.at:1479: $PREPARSER ./input --debug
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token 'a' ('a')
Shifting token 'a' ('a')
Entering state 1
Stack now 0 1
Reading a token
Next token is token 'b' ('b')
syntax error
Shifting token error ()
Entering state 3
Stack now 0 1 3
Next token is token 'b' ('b')
Shifting token 'b' ('b')
Entering state 5
Stack now 0 1 3 5
Reading a token
Next token is token "invalid token" ()
Error: popping token 'b' ('b')
DESTROY 'b'
Stack now 0 1 3
Error: popping token error ()
Stack now 0 1
Shifting token error ()
Entering state 3
Stack now 0 1 3
Next token is token "invalid token" ()
Error: discarding token "invalid token" ()
Error: popping token error ()
Stack now 0 1
Shifting token error ()
Entering state 3
Stack now 0 1 3
Reading a token
Now at end of input.
Cleanup: discarding lookahead token "end of file" ()
Stack now 0 1 3
Cleanup: popping token error ()
Cleanup: popping token 'a' ('a')
DESTROY 'a'
./actions.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
348. actions.at:1429: ok
350. actions.at:1596: testing Default %printer and %destructor for midrule values ...
./actions.at:1634: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:1634: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Werror
stderr:
input.y:24.57-59: error: useless %destructor for type <*> [-Werror=other]
input.y:24.57-59: error: useless %printer for type <*> [-Werror=other]
input.y:33.3-23: error: unset value: $$ [-Werror=other]
input.y:32.3-23: error: unused value: $3 [-Werror=other]
./actions.at:1634: sed 's,.*/$,,' stderr 1>&2
./actions.at:1634: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=error
stderr:
stdout:
349. actions.at:1532: ok
351. actions.at:1743: testing @$ in %initial-action implies %locations ...
./actions.at:1743: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:1634: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y -Wnone,none -Werror --trace=none
./actions.at:1743: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./actions.at:1634: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o input.c input.y --warnings=none -Werror --trace=none
./actions.at:1641: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -o input.c input.y
./actions.at:1641: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y -Werror
stderr:
input.y:24.57-59: error: useless %destructor for type <*> [-Werror=other]
   24 | %printer { #error "<*> printer should not be used" } <*>
      |                                                      ^~~
input.y:24.57-59: error: useless %printer for type <*> [-Werror=other]
   24 | %printer { #error "<*> printer should not be used" } <*>
      |                                                      ^~~
input.y:33.3-23: error: unset value: $$ [-Werror=other]
   33 |   { @$ = 4; }  // Only used.
      |   ^~~~~~~~~~~~~~~~~~~~~
input.y:32.3-23: error: unused value: $3 [-Werror=other]
   32 |   { USE ($$); @$ = 3; }  // Only set.
      |   ^~~~~~~~~~~~~~~~~~~~~
./actions.at:1641: sed 's,.*/$,,' stderr 1>&2
./actions.at:1641: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y --warnings=error
./actions.at:1641: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y -Wnone,none -Werror --trace=none
./actions.at:1641: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y --warnings=none -Werror --trace=none
stderr:
stdout:
351. actions.at:1743: ok
352. actions.at:1744: testing @$ in %destructor implies %locations ...
./actions.at:1744: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:1656: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./actions.at:1744: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from input.hh:70,
                 from input.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1779:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:1615:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2048:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1779:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1601:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&, const yy::parser::location_type&)' at input.cc:2612:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1779:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1601:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2595:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:2583:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./actions.at:1060: $PREPARSER ./input '(x)'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (0@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1060: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1060: $PREPARSER ./input '!'
stderr:
sending: '!' (0@0-9)
sending: END (1@10-19)
raise (4@9-9): %empty
check-spontaneous-errors (5@9-19): error (@9-19)
Freeing token END (1@10-19)
Freeing nterm input (5@0-19)
Successful parse.
./actions.at:1060: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1060: $PREPARSER ./input '!!!'
stderr:
sending: '!' (0@0-9)
sending: '!' (1@10-19)
sending: '!' (2@20-29)
raise (5@10-29): ! (1@20-29) ! (2@20-29)
check-spontaneous-errors (5@10-29): error (@10-29)
sending: END (3@30-39)
Freeing token END (3@30-39)
Freeing nterm input (5@0-29)
Successful parse.
./actions.at:1060: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1060: $PREPARSER ./input '(y)'
stderr:
sending: '(' (0@0-9)
sending: 'y' (1@10-19)
10.10-19.18: syntax error, unexpected 'y', expecting 'x'
Freeing token 'y' (1@10-19)
sending: ')' (2@20-29)
line (-1@0-29): '(' (0@0-9) error (@10-19) ')' (2@20-29)
sending: END (3@30-39)
input (0@29-29): /* Nothing */
input (2@0-29): line (-1@0-29) input (0@29-29)
Freeing token END (3@30-39)
Freeing nterm input (2@0-29)
Successful parse.
./actions.at:1060: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1060: $PREPARSER ./input '(xxxxx)(x)(x)y'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: 'x' (2@20-29)
thing (2@20-29): 'x' (2@20-29)
sending: 'x' (3@30-39)
30.30-39.38: syntax error, unexpected 'x', expecting ')'
Freeing nterm thing (2@20-29)
Freeing nterm thing (1@10-19)
Freeing token 'x' (3@30-39)
sending: 'x' (4@40-49)
Freeing token 'x' (4@40-49)
sending: 'x' (5@50-59)
Freeing token 'x' (5@50-59)
sending: ')' (6@60-69)
line (-1@0-69): '(' (0@0-9) error (@10-59) ')' (6@60-69)
sending: '(' (7@70-79)
sending: 'x' (8@80-89)
thing (8@80-89): 'x' (8@80-89)
sending: ')' (9@90-99)
line (7@70-99): '(' (7@70-79) thing (8@80-89) ')' (9@90-99)
sending: '(' (10@100-109)
sending: 'x' (11@110-119)
thing (11@110-119): 'x' (11@110-119)
sending: ')' (12@120-129)
line (10@100-129): '(' (10@100-109) thing (11@110-119) ')' (12@120-129)
sending: 'y' (13@130-139)
input (0@129-129): /* Nothing */
input (2@100-129): line (10@100-129) input (0@129-129)
input (2@70-129): line (7@70-99) input (2@100-129)
input (2@0-129): line (-1@0-69) input (2@70-129)
130.130-139.138: syntax error, unexpected 'y', expecting END
Freeing nterm input (2@0-129)
Freeing token 'y' (13@130-139)
Parsing FAILED.
./actions.at:1060: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./actions.at:1060: $PREPARSER ./input '(x)(x)x'
stderr:
sending: '(' (0@0-9)
sending: 'x' (1@10-19)
thing (1@10-19): 'x' (1@10-19)
sending: ')' (2@20-29)
line (0@0-29): '(' (0@0-9) thing (1@10-19) ')' (2@20-29)
sending: '(' (3@30-39)
sending: 'x' (4@40-49)
thing (4@40-49): 'x' (4@40-49)
sending: ')' (5@50-59)
line (3@30-59): '(' (3@30-39) thing (4@40-49) ')' (5@50-59)
sending: 'x' (6@60-69)
thing (6@60-69): 'x' (6@60-69)
sending: END (7@70-79)
70.70-79.78: syntax error, unexpected END, expecting 'x'
Freeing nterm thing (6@60-69)
Freeing nterm line (3@30-59)
Freeing nterm line (0@0-29)
Freeing token END (7@70-79)
Parsing FAILED.
./actions.at:1060: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
344. actions.at:1060: ok
353. actions.at:1745: testing @$ in %printer implies %locations ...
./actions.at:1745: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:1745: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./actions.at:1657: $PREPARSER ./input --debug
stderr:
Starting parse
Entering state 0
Stack now 0
Reducing stack by rule 1 (line 30):
-> $$ = nterm $@1 (: )
Entering state 2
Stack now 0 2
Reducing stack by rule 2 (line 31):
-> $$ = nterm @2 (: 2)
Entering state 4
Stack now 0 2 4
Reducing stack by rule 3 (line 32):
-> $$ = nterm @3 (: 3)
Entering state 5
Stack now 0 2 4 5
Reducing stack by rule 4 (line 33):
-> $$ = nterm @4 (: 4)
Entering state 6
Stack now 0 2 4 5 6
Reading a token
Now at end of input.
syntax error
Error: popping nterm @4 (: 4)
DESTROY 4
Stack now 0 2 4 5
Error: popping nterm @3 (: 3)
DESTROY 3
Stack now 0 2 4
Error: popping nterm @2 (: 2)
DESTROY 2
Stack now 0 2
Error: popping nterm $@1 (: )
Stack now 0
Cleanup: discarding lookahead token "end of file" (: )
Stack now 0
./actions.at:1657: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
350. actions.at:1596: ok
stderr:
stdout:
352. actions.at:1744: ok
354. actions.at:1856: testing Qualified $$ in actions: yacc.c ...
./actions.at:1856: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
355. actions.at:1856: testing Qualified $$ in actions: glr.c ...
./actions.at:1856: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:1856: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./actions.at:1856: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
353. actions.at:1745: ok
356. actions.at:1856: testing Qualified $$ in actions: lalr1.cc ...
./actions.at:1856: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./actions.at:1856: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./actions.at:1856: $PREPARSER ./input --debug
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token UNTYPED (ival: 10, fval: 0.1)
Shifting token UNTYPED (ival: 10, fval: 0.1)
Entering state 1
Stack now 0 1
Reading a token
Next token is token INT (ival: 20, fval: 0.2)
Shifting token INT (ival: 20, fval: 0.2)
Entering state 3
Stack now 0 1 3
Reducing stack by rule 1 (line 53):
$1 = token UNTYPED (ival: 10, fval: 0.1)
$2 = token INT (ival: 20, fval: 0.2)
-> $$ = nterm float (ival: 30, fval: 0.3)
Entering state 2
Stack now 0 2
Reading a token
Now at end of input.
Shifting token "end of file" ()
Entering state 4
Stack now 0 2 4
Stack now 0 2 4
Cleanup: popping token "end of file" ()
Cleanup: popping nterm float (ival: 30, fval: 0.3)
./actions.at:1856: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token UNTYPED (ival: 10, fval: 0.1)
Shifting token UNTYPED (ival: 10, fval: 0.1)
Entering state 1
Stack now 0 1
Reading a token
Next token is token INT (ival: 20, fval: 0.2)
Shifting token INT (ival: 20, fval: 0.2)
Entering state 3
Stack now 0 1 3
Reducing stack by rule 1 (line 53):
$1 = token UNTYPED (ival: 10, fval: 0.1)
$2 = token INT (ival: 20, fval: 0.2)
-> $$ = nterm float (ival: 30, fval: 0.3)
Entering state 2
Stack now 0 2
Reading a token
Now at end of input.
Shifting token "end of file" ()
Entering state 4
Stack now 0 2 4
Stack now 0 2 4
Cleanup: popping token "end of file" ()
Cleanup: popping nterm float (ival: 30, fval: 0.3)
./actions.at:1856: sed -ne '/ival:/p' stderr
354. actions.at:1856: ok
357. actions.at:1856: testing Qualified $$ in actions: glr.cc ...
./actions.at:1856: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./actions.at:1856: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./actions.at:1856: $PREPARSER ./input --debug
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token UNTYPED (ival: 10, fval: 0.1)
Shifting token UNTYPED (ival: 10, fval: 0.1)
Entering state 1
Reading a token
Next token is token INT (ival: 20, fval: 0.2)
Shifting token INT (ival: 20, fval: 0.2)
Entering state 3
Reducing stack 0 by rule 1 (line 53):
$1 = token UNTYPED (ival: 10, fval: 0.1)
$2 = token INT (ival: 20, fval: 0.2)
-> $$ = nterm float (ival: 30, fval: 0.3)
Entering state 2
Reading a token
Now at end of input.
Shifting token "end of file" ()
Entering state 4
Cleanup: popping token "end of file" ()
Cleanup: popping nterm float (ival: 30, fval: 0.3)
./actions.at:1856: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token UNTYPED (ival: 10, fval: 0.1)
Shifting token UNTYPED (ival: 10, fval: 0.1)
Entering state 1
Reading a token
Next token is token INT (ival: 20, fval: 0.2)
Shifting token INT (ival: 20, fval: 0.2)
Entering state 3
Reducing stack 0 by rule 1 (line 53):
$1 = token UNTYPED (ival: 10, fval: 0.1)
$2 = token INT (ival: 20, fval: 0.2)
-> $$ = nterm float (ival: 30, fval: 0.3)
Entering state 2
Reading a token
Now at end of input.
Shifting token "end of file" ()
Entering state 4
Cleanup: popping token "end of file" ()
Cleanup: popping nterm float (ival: 30, fval: 0.3)
./actions.at:1856: sed -ne '/ival:/p' stderr
355. actions.at:1856: ok
358. actions.at:1856: testing Qualified $$ in actions: glr2.cc ...
./actions.at:1856: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y ./actions.at:1856: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./actions.at:1856: $PREPARSER ./input --debug stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token UNTYPED (ival: 10, fval: 0.1) Shifting token UNTYPED (ival: 10, fval: 0.1) Entering state 1 Stack now 0 1 Reading a token Next token is token INT (ival: 20, fval: 0.2) Shifting token INT (ival: 20, fval: 0.2) Entering state 3 Stack now 0 1 3 Reducing stack by rule 1 (line 55): $1 = token UNTYPED (ival: 10, fval: 0.1) $2 = token INT (ival: 20, fval: 0.2) -> $$ = nterm float (ival: 30, fval: 0.3) Entering state 2 Stack now 0 2 Reading a token Next token is token "end of file" () Shifting token "end of file" () Entering state 4 Stack now 0 2 4 Stack now 0 2 4 Cleanup: popping token "end of file" () Cleanup: popping nterm float (ival: 30, fval: 0.3) ./actions.at:1856: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token UNTYPED (ival: 10, fval: 0.1) Shifting token UNTYPED (ival: 10, fval: 0.1) Entering state 1 Stack now 0 1 Reading a token Next token is token INT (ival: 20, fval: 0.2) Shifting token INT (ival: 20, fval: 0.2) Entering state 3 Stack now 0 1 3 Reducing stack by rule 1 (line 55): $1 = token UNTYPED (ival: 10, fval: 0.1) $2 = token INT (ival: 20, fval: 0.2) -> $$ = nterm float (ival: 30, fval: 0.3) Entering state 2 Stack now 0 2 Reading a token Next token is token "end of file" () Shifting token "end of file" () Entering state 4 Stack now 0 2 4 Stack now 0 2 4 Cleanup: popping token "end of file" () Cleanup: popping nterm float (ival: 30, fval: 0.3) ./actions.at:1856: sed -ne '/ival:/p' stderr 356. actions.at:1856: ok 359. 
actions.at:1863: testing Destroying lookahead assigned by semantic action ...
./actions.at:1905: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./actions.at:1906: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./actions.at:1856: $PREPARSER ./input --debug
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token UNTYPED (ival: 10, fval: 0.1)
Shifting token UNTYPED (ival: 10, fval: 0.1)
Entering state 1
Reading a token
Next token is token INT (ival: 20, fval: 0.2)
Shifting token INT (ival: 20, fval: 0.2)
Entering state 3
Reducing stack 0 by rule 1 (line 55):
   $1 = token UNTYPED (ival: 10, fval: 0.1)
   $2 = token INT (ival: 20, fval: 0.2)
-> $$ = nterm float (ival: 30, fval: 0.3)
Entering state 2
Reading a token
Now at end of input.
Shifting token "end of file" ()
Entering state 4
Cleanup: popping token "end of file" ()
Cleanup: popping nterm float (ival: 30, fval: 0.3)
./actions.at:1856: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token UNTYPED (ival: 10, fval: 0.1)
Shifting token UNTYPED (ival: 10, fval: 0.1)
Entering state 1
Reading a token
Next token is token INT (ival: 20, fval: 0.2)
Shifting token INT (ival: 20, fval: 0.2)
Entering state 3
Reducing stack 0 by rule 1 (line 55):
   $1 = token UNTYPED (ival: 10, fval: 0.1)
   $2 = token INT (ival: 20, fval: 0.2)
-> $$ = nterm float (ival: 30, fval: 0.3)
Entering state 2
Reading a token
Now at end of input.
Shifting token "end of file" ()
Entering state 4
Cleanup: popping token "end of file" ()
Cleanup: popping nterm float (ival: 30, fval: 0.3)
./actions.at:1856: sed -ne '/ival:/p' stderr
357. actions.at:1856: ok
stderr:
stdout:
./actions.at:1907: $PREPARSER ./input
stderr:
'b' destructor
'a' destructor
./actions.at:1907: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
359. actions.at:1863: ok
360. actions.at:1918: testing YYBACKUP ...
./actions.at:1953: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
361. types.at:25: testing %union vs. api.value.type ...
./types.at:34: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y
./actions.at:1954: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
361. types.at:25: ok
362. types.at:44: testing %yacc vs. api.value.type=union ...
./types.at:53: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y
362. types.at:44: ok
363. types.at:139: testing yacc.c api.value.type={double} ...
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y
./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS
stderr:
stdout:
./actions.at:1955: $PREPARSER ./input
stderr:
./actions.at:1955: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
360. actions.at:1918: stdout:
ok
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
363. types.at:139: ok
364. types.at:139: testing yacc.c api.value.type={double} %header ...
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y
365. types.at:139: testing yacc.c api.value.type={variant} ...
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: stderr: stdout: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 364. types.at:139: ok 365. types.at:139: ok 366. types.at:139: testing yacc.c api.value.type={variant} %header ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y 367. types.at:139: testing yacc.c api.value.type={struct foo} ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 367. types.at:139: ok stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: 368. types.at:139: testing yacc.c api.value.type={struct foo} %header ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 366. types.at:139: ok 369. types.at:139: testing yacc.c api.value.type={struct bar} ... 
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 368. types.at:139: ok stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 370. types.at:139: testing yacc.c api.value.type={struct bar} %header ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y 369. types.at:139: ok 371. types.at:139: testing yacc.c api.value.type={union foo} ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: stderr: stdout: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./types.at:139: $PREPARSER ./test stderr: 370. types.at:139: ok ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 371. types.at:139: ok 372. types.at:139: testing yacc.c api.value.type={union foo} %header ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y stderr: In file included from /usr/include/c++/12/vector:70, from input.cc:109: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2106:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:1945:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2378:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2106:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1931:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at input.cc:2797:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2106:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1931:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2780:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:2768:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./actions.at:1856: $PREPARSER ./input --debug stderr: Starting parse Entering state 0 Reading a token Next token is token UNTYPED (ival: 10, fval: 0.1) Shifting token UNTYPED (ival: 10, fval: 0.1) Entering state 1 Reading a token Next token is token INT (ival: 20, fval: 0.2) Shifting token INT (ival: 20, fval: 0.2) Entering state 3 Reducing stack 0 by rule 1 (line 55): $1 = token UNTYPED (ival: 10, fval: 0.1) $2 = token INT (ival: 20, fval: 0.2) -> $$ = nterm float (ival: 30, fval: 0.3) Entering state 2 Reading a token Now at end of input. 
Shifting token "end of file" ()
Entering state 4
Cleanup: popping token "end of file" ()
Cleanup: popping nterm float (ival: 30, fval: 0.3)
./actions.at:1856: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
373. types.at:139: testing yacc.c %union { float fval; int ival; }; ...
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token UNTYPED (ival: 10, fval: 0.1)
Shifting token UNTYPED (ival: 10, fval: 0.1)
Entering state 1
Reading a token
Next token is token INT (ival: 20, fval: 0.2)
Shifting token INT (ival: 20, fval: 0.2)
Entering state 3
Reducing stack 0 by rule 1 (line 55):
   $1 = token UNTYPED (ival: 10, fval: 0.1)
   $2 = token INT (ival: 20, fval: 0.2)
-> $$ = nterm float (ival: 30, fval: 0.3)
Entering state 2
Reading a token
Now at end of input.
Shifting token "end of file" ()
Entering state 4
Cleanup: popping token "end of file" ()
Cleanup: popping nterm float (ival: 30, fval: 0.3)
./actions.at:1856: sed -ne '/ival:/p' stderr
358. actions.at:1856: ok
./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS
./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS
374. types.at:139: testing yacc.c %union { float fval; int ival; }; %header ...
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y
./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS
stderr:
stdout:
stderr:
./types.at:139: $PREPARSER ./test
stdout:
stderr:
stderr:
./types.at:139: $PREPARSER ./test
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stdout:
./types.at:139: $PREPARSER ./test
stderr:
stderr:
372. types.at:139: ok
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
374. types.at:139: ok
373. types.at:139: ok
375. types.at:139: testing yacc.c %union foo { float fval; int ival; }; ...
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y
376. types.at:139: testing yacc.c %union foo { float fval; int ival; }; %header ...
377. types.at:139: testing yacc.c api.value.union.name=foo; %union { float fval; int ival; }; ...
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y
./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS
./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS
./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
376. types.at:139: ok
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
378. types.at:139: testing yacc.c api.value.union.name=foo; %union { float fval; int ival; }; %header ...
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y
377. types.at:139: ok
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
375. types.at:139: ok
379. types.at:139: testing yacc.c api.value.type=union ...
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS 380. types.at:139: testing yacc.c api.value.type=union %header ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 378. types.at:139: ok 381. types.at:139: testing glr.c api.value.type={double} ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 379. types.at:139: ok ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS 382. types.at:139: testing glr.c api.value.type={double} %header ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 380. types.at:139: ok ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS 383. types.at:139: testing glr.c api.value.type={variant} ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 381. 
types.at:139: ok 384. types.at:139: testing glr.c api.value.type={variant} %header ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 382. types.at:139: ok 385. types.at:139: testing glr.c api.value.type={struct foo} ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 383. types.at:139: ok ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS 386. types.at:139: testing glr.c api.value.type={struct foo} %header ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 384. types.at:139: ok 387. types.at:139: testing glr.c api.value.type={struct bar} ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stdout: ./types.at:139: $PREPARSER ./test 386. types.at:139: ok stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 385. types.at:139: ok 388. 
types.at:139: testing glr.c api.value.type={struct bar} %header ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y 389. types.at:139: testing glr.c api.value.type={union foo} ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 387. types.at:139: ok 390. types.at:139: testing glr.c api.value.type={union foo} %header ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 388. types.at:139: ok 391. types.at:139: testing glr.c %union { float fval; int ival; }; ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 389. types.at:139: ok 392. types.at:139: testing glr.c %union { float fval; int ival; }; %header ... 
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 390. types.at:139: ok 393. types.at:139: testing glr.c %union foo { float fval; int ival; }; ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 391. types.at:139: ok 394. types.at:139: testing glr.c %union foo { float fval; int ival; }; %header ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 392. types.at:139: ok 395. types.at:139: testing glr.c api.value.union.name=foo; %union { float fval; int ival; }; ... ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y ./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 393. types.at:139: ok 396. types.at:139: testing glr.c api.value.union.name=foo; %union { float fval; int ival; }; %header ... 
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y
./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
394. types.at:139: ok
397. types.at:139: testing glr.c api.value.type=union ...
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y
./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
395. types.at:139: ok
398. types.at:139: testing glr.c api.value.type=union %header ...
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.c test.y
./types.at:139: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o test test.c $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
396. types.at:139: ok
399. types.at:139: testing lalr1.cc api.value.type={double} ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
397. types.at:139: ok
400. types.at:139: testing lalr1.cc api.value.type={double} %header ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
398. types.at:139: ok
401. types.at:139: testing lalr1.cc api.value.type={variant} ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:639:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1065:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:639:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1065:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1316:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:635:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:635:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:614:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:62:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:644:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1070:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:644:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1070:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1321:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:639:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1065:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:639:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1065:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1316:15:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:635:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:363:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:635:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:363:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:614:15:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:62:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:644:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1070:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:644:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1070:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1321:15:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:639:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1065:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:639:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1065:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1316:15:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:62:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:644:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1070:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:644:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1070:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1321:15:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:635:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:363:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:635:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:363:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:614:15:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:639:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1065:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:639:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1065:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1316:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:635:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:635:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:614:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:62:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:644:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1070:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:644:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1070:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1321:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:639:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1065:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:639:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1065:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1316:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:62:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:644:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1070:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:644:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1070:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1321:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:635:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:635:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:614:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:62:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:644:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1070:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:644:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1070:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1321:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1;
export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:57: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:639:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1065:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:639:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1065:19, inlined from 'virtual int yy::parser::parse()' at test.cc:1316:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:53, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:635:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:635:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19, inlined from 'virtual int yy::parser::parse()' at test.cc:614:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export 
NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
399. types.at:139: ok
stderr:
402. types.at:139: testing lalr1.cc api.value.type={variant} %header ...
stdout:
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
401. types.at:139: ok
403. types.at:139: testing lalr1.cc api.value.type={struct foo} ...
======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 400. types.at:139: ok 404. types.at:139: testing lalr1.cc api.value.type={struct foo} %header ... ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:58, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:640:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:640:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19, inlined from 'virtual int yy::parser::parse()' at test.cc:614:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS ======== Testing with C++ standard flags: '' 
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:58, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 444 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In file included from /usr/include/c++/12/vector:64: In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:640:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:363:19: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type 
'__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:640:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:363:19, inlined from 'virtual int yy::parser::parse()' at test.cc:614:15: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from 
/usr/include/c++/12/vector:70, from test.hh:58, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 444 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In file included from /usr/include/c++/12/vector:64: In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:640:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:363:19: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:640:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:363:19, inlined from 'virtual int yy::parser::parse()' at test.cc:614:15: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export 
NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:58, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:640:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:640:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19, inlined from 'virtual int yy::parser::parse()' at test.cc:614:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison 
--color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:58, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:640:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:640:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19, inlined from 'virtual int yy::parser::parse()' at test.cc:614:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison 
--color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:58, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:640:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:640:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:363:19, inlined from 'virtual int yy::parser::parse()' at test.cc:614:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export 
NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
403. types.at:139: ok
405. types.at:139: testing lalr1.cc api.value.type={struct bar} ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
402. types.at:139: ok
406. types.at:139: testing lalr1.cc api.value.type={struct bar} %header ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
404. types.at:139: ok
407. types.at:139: testing lalr1.cc api.value.type={union foo} ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
405. types.at:139: ok
408. types.at:139: testing lalr1.cc api.value.type={union foo} %header ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
406. types.at:139: ok
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
409. types.at:139: testing lalr1.cc %union { float fval; int ival; }; ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
407. types.at:139: ok
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
410. types.at:139: testing lalr1.cc %union { float fval; int ival; }; %header ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
stdout:
./types.at:139: $PREPARSER ./test
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
408. types.at:139: ok
411. types.at:139: testing lalr1.cc api.value.type=union ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
410. types.at:139: ok
412. types.at:139: testing lalr1.cc api.value.type=union %header ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
409. types.at:139: ok
413. types.at:139: testing lalr1.cc api.value.type=variant ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1478:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1478:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 | _M_realloc_insert(end(), __x);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1478:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1736:15:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 | _M_realloc_insert(end(), __x);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1478:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 | _M_realloc_insert(end(), __x);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1478:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1736:15:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 | _M_realloc_insert(end(), __x);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1478:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1478:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test 
stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:57: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1478:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: 
./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 411. types.at:139: ok 414. types.at:139: testing lalr1.cc api.value.type=variant %header ... ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 412. types.at:139: ok 415. types.at:139: testing lalr1.cc api.value.type=variant ... ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:53, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:430:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: 
./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:57: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: 413. types.at:139: ok ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y 416. types.at:139: testing lalr1.cc api.value.type=variant %header ... 
======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:53, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 444 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In file included from /usr/include/c++/12/vector:64: In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:430:19: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:430:19, inlined from 'virtual int yy::parser::parse()' at test.cc:688:15: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type 
'__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:57: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 444 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In file included from /usr/include/c++/12/vector:64: In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1478:19: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type 
'__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:53, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:53, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 444 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In file included from /usr/include/c++/12/vector:64: In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:430:19: 
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:430:19, inlined from 'virtual int yy::parser::parse()' at test.cc:688:15: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:53, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 444 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In file included from /usr/include/c++/12/vector:64: In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = 
yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:430:19: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:57: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 444 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In file included from /usr/include/c++/12/vector:64: In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = 
yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1478:19: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:53, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:430:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from 
/usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 | _M_realloc_insert(end(), __x);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:430:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 | _M_realloc_insert(end(), __x);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:430:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:430:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:985:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:981:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
414. types.at:139: ok
417. types.at:139: testing lalr1.cc api.value.type=variant api.token.constructor ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:957:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1479:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:957:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1479:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1738:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
415. types.at:139: ok
418. types.at:139: testing lalr1.cc api.value.type=variant api.token.constructor %header ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:957:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1479:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 | _M_realloc_insert(end(), __x);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:957:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1479:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1738:15:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 | _M_realloc_insert(end(), __x);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:953:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:308:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:953:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:308:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:567:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
416. types.at:139: ok
419. types.at:139: testing lalr1.cc %code requires { #include } api.value.type=variant ...
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:957:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1479:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 | _M_realloc_insert(end(), __x);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:957:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:1479:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1738:15:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 | _M_realloc_insert(end(), __x);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:953:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:308:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 | _M_realloc_insert(end(), __x);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:953:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:308:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:567:15:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 | _M_realloc_insert(end(), __x);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:62:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:990:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1483:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:990:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1483:19, inlined from 'virtual int yy::parser::parse()' at test.cc:1742:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS 
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:957:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1479:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:957:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1479:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1738:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++98 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++03 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++11 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void yy::parser::stack<T, S>::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:953:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:308:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void yy::parser::stack<T, S>::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:953:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at test.cc:308:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:567:15:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:957:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1479:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:957:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1479:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1738:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:953:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:308:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:953:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:308:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:567:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:62:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:990:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1483:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:990:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1483:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1742:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:957:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1479:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:957:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1479:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1738:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:953:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:308:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:953:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:308:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:567:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:62:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:990:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1483:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:990:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1483:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1742:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:53,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:953:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:308:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:953:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:308:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:567:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
417. types.at:139: ok
420. types.at:139: testing lalr1.cc %code requires { #include } api.value.type=variant %header ...
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:58,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:986:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:430:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:986:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:430:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:689:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++98 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
418. types.at:139: ok
421. types.at:139: testing lalr1.cc %code requires { #include } api.value.type=variant api.token.constructor ...
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++03 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: ./check
-std=c++11 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
419. types.at:139: ok
422. types.at:139: testing lalr1.cc %code requires { #include } api.value.type=variant api.token.constructor %header ...
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:62:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:1022:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1562:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:1022:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1562:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1822:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:58,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:986:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:430:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:986:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:430:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:689:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stdout:
./types.at:139: ./check
-std=c++98 not supported
======== Testing with C++ standard flags: ''
stderr:
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++03 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++11 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:58,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:1018:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:320:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:1018:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:320:19, inlined from 'virtual int yy::parser::parse()' at test.cc:580:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stdout: ======== Testing with C++ standard flags: '' ./types.at:139: ./check ./types.at:139: COLUMNS=1000; 
export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++98 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++03 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++11 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:58, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:986:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:430:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:986:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:430:19, inlined from 'virtual int yy::parser::parse()' at test.cc:689:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS 
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:62:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:1022:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1562:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:1022:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1562:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1822:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:58,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:1018:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:320:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:1018:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:320:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:580:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:62:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:1022:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1562:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:1022:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1562:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1822:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
stderr:
stdout:
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:58,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:1018:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:320:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.hh:1018:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:320:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:580:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
420. types.at:139: ok
423. types.at:139: testing lalr1.cc %code requires { #include } api.value.type=variant api.token.constructor ...
======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:62: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:990:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1530:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:990:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1530:19, inlined from 'virtual int yy::parser::parse()' at test.cc:1790:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS 
stderr: stdout: ./types.at:139: ./check -std=c++98 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++03 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++11 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 421. types.at:139: ok 424. types.at:139: testing lalr1.cc %code requires { #include } api.value.type=variant api.token.constructor %header ... ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 422. types.at:139: ok 425. types.at:139: testing glr.cc api.value.type={double} ... 
======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:62: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:990:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1530:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:990:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1530:19, inlined from 'virtual int yy::parser::parse()' at test.cc:1790:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:58, from test.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:986:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:320:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:986:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:320:19, inlined from 'virtual int yy::parser::parse()' at test.cc:580:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS 
stderr:
stdout:
./types.at:139: ./check
-std=c++98 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++03 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: ./check
-std=c++11 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:62:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:990:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1530:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:990:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1530:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:1790:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:58,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:986:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:320:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:986:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:320:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:580:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:58,
                 from test.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:986:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:320:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.hh:986:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:320:19,
    inlined from 'virtual int yy::parser::parse()' at test.cc:580:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
423. types.at:139: ok
426. types.at:139: testing glr.cc api.value.type={double} %header ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
424. types.at:139: ok
427. types.at:139: testing glr.cc api.value.type={variant} ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
425. types.at:139: ok
428. types.at:139: testing glr.cc api.value.type={variant} %header ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
426. types.at:139: ok
429. types.at:139: testing glr.cc api.value.type={struct foo} ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
427. types.at:139: ok
430. types.at:139: testing glr.cc api.value.type={struct foo} %header ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
428. types.at:139: ok
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
431. types.at:139: testing glr.cc api.value.type={struct bar} ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
429. types.at:139: ok
432. types.at:139: testing glr.cc api.value.type={struct bar} %header ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
430. types.at:139: ok
433. types.at:139: testing glr.cc api.value.type={union foo} ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
431. types.at:139: ok
434. types.at:139: testing glr.cc api.value.type={union foo} %header ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
432. types.at:139: ok
435. types.at:139: testing glr.cc %union { float fval; int ival; }; ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
433. types.at:139: ok
436. types.at:139: testing glr.cc %union { float fval; int ival; }; %header ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
434. types.at:139: ok
437. types.at:139: testing glr.cc api.value.type=union ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
435. types.at:139: ok
438. types.at:139: testing glr.cc api.value.type=union %header ...
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
436. types.at:139: ok
439. types.at:139: testing glr2.cc api.value.type={double} ...
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
437. types.at:139: ok
440. types.at:139: testing glr2.cc api.value.type={double} %header ...
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2083:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1922:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2345:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2083:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1908:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2761:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2083:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1908:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2744:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2732:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++98 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++03 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:51,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++98 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++03 not supported
======== Testing with C++ standard flags: ''
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2083:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1922:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2345:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2083:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1908:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2761:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2083:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1908:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2744:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2732:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS 
$CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:51, from test.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:86: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2083:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1922:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2345:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2083:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1908:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2761:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2083:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1908:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2744:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2732:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' 
stderr
438. types.at:139: ok
441. types.at:139: testing glr2.cc api.value.type={variant} ...
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:51,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:86: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2083:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1922:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2345:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2083:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1908:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2761:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2083:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1908:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2744:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2732:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:91: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1927:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2350:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2766:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2749:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2737:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++98 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++03 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: 
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:51,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:91: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1927:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2350:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2766:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2749:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2737:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS 
$CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:91: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1927:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2350:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), 
std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2766:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2749:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2737:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS ./types.at:139: $CXX $CPPFLAGS 
$CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 439. types.at:139: ok 442. types.at:139: testing glr2.cc api.value.type={variant} %header ... ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:91: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1927:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2350:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2766:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2749:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2737:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 440. types.at:139: ok 443. 
types.at:139: testing glr2.cc api.value.type={struct foo} ... ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:56, from test.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++98 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++03 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; 
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:91: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1927:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2350:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2766:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2749:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2737:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++98 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++03 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:56, from test.cc:76: 
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS 
$CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:56, from test.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:91: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1927:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2350:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2766:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2749:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2737:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: stdout: ./types.at:139: $PREPARSER ./test ./types.at:139: $PREPARSER ./test stderr: stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 441. types.at:139: ok ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: 444. types.at:139: testing glr2.cc api.value.type={struct foo} %header ... 
======== Testing with C++ standard flags: '' stdout: ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:91: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1927:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2350:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2766:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2749:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2737:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:56, from test.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ======== Testing with C++ standard flags: '' stdout: ./types.at:139: $PREPARSER ./test ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y stderr: stdout: ./types.at:139: ./check -std=c++98 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o 
check check.cc $LIBS ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:56, from test.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++03 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export 
NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:56, from test.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In 
member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:91: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1927:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2350:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2766:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2749:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2737:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d'
stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:56,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS
$CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
442. types.at:139: ok
445. types.at:139: testing glr2.cc api.value.type={struct bar} ...
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:56,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:100:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2097:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1936:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2359:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2097:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1922:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2779:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2097:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1922:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2762:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2750:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++98 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++03 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS;
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
443. types.at:139: ok
446. types.at:139: testing glr2.cc api.value.type={struct bar} %header ...
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:100:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2097:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1936:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2359:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2097:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1922:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2779:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2097:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1922:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2762:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2750:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:65,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2416:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2399:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2387:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++98 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++03 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; 
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:100:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2097:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1936:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2359:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2097:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1922:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2779:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2097:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1922:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2762:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2750:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
444. types.at:139: ok
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
447. types.at:139: testing glr2.cc api.value.type={union foo} ...
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:65,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2416:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2399:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2387:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:91:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1927:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2350:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2766:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2749:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2737:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++98 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++03 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS;
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:100:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2097:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1936:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2359:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2097:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1922:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2779:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2097:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1922:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2762:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2750:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:65,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In
member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2416:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2399:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2387:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:91:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1927:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2350:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2766:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2749:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2737:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS
$CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:65,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2416:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2399:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2387:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
stderr:
./types.at:139: $PREPARSER ./test
stdout:
./types.at:139: ./check
stderr:
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS
$CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:91:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1927:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2350:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2766:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2749:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2737:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS 
$CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y stderr: stdout: ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 445. types.at:139: ok 448. types.at:139: testing glr2.cc api.value.type={union foo} %header ... ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:91: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1927:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2350:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2766:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2088:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1913:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2749:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2737:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:56, from test.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++98 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++03 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; 
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 446. types.at:139: ok 449. types.at:139: testing glr2.cc %union { float fval; int ival; }; ... ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:56, from test.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:86: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2090:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1929:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2352:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2090:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1915:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2768:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2090:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1915:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2751:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2739:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++98 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++03 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; 
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:56,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
447. types.at:139: ok
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
450. types.at:139: testing glr2.cc %union { float fval; int ival; }; %header ...
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2090:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1929:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2352:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2090:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1915:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2768:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2090:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1915:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2751:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2739:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:51,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
-std=c++98 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:56,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
stdout:
======== Testing with C++ standard flags: ''
./types.at:139: ./check
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
-std=c++03 not supported
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2090:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1929:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2352:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2090:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1915:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2768:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2090:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1915:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2751:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2739:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:51,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2090:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1929:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2352:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2090:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1915:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2768:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2090:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1915:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2751:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2739:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:51,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
448. types.at:139: ok
451. types.at:139: testing glr2.cc api.value.type=union ...
======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:51, from test.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:86: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2095:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1934:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2357:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2095:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1920:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2773:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2095:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1920:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2756:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2744:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++98 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++03 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; 
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 449. types.at:139: ok 452. types.at:139: testing glr2.cc api.value.type=union %header ... ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:86: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2095:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1934:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2357:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2095:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1920:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2773:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2095:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1920:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2756:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2744:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ======== Testing with C++ standard flags: '' stdout: ./types.at:139: ./check ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS 
$CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:51, from test.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++98 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check -std=c++03 not supported ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; 
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:86: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2095:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1934:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2357:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2095:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1920:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2773:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2095:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1920:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2756:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2744:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:51, from test.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y stderr: stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 450. types.at:139: ok 453. types.at:377: testing lalr1.cc: Named %union ... 
./types.at:377: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
453. types.at:377: ok
454. types.at:377: testing glr.cc: Named %union ...
./types.at:377: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y
454. types.at:377: ok
455. scanner.at:326: testing Token numbers: yacc.c ...
./scanner.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./scanner.at:326: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./scanner.at:326: $EGREP -c 'yytranslate\[\]|translate_table\[\]|translate_table =|translate_table_ =' input.c
./scanner.at:326: $PREPARSER ./input
stderr:
./scanner.at:326: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
455. scanner.at:326: ok
456. scanner.at:326: testing Token numbers: yacc.c api.token.raw ...
./scanner.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./scanner.at:326: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./scanner.at:326: $EGREP -c 'yytranslate\[\]|translate_table\[\]|translate_table =|translate_table_ =' input.c
./scanner.at:326: $PREPARSER ./input
stderr:
./scanner.at:326: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
456. scanner.at:326: ok
457. scanner.at:326: testing Token numbers: glr.c ...
./scanner.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./scanner.at:326: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./scanner.at:326: $EGREP -c 'yytranslate\[\]|translate_table\[\]|translate_table =|translate_table_ =' input.c ./scanner.at:326: $PREPARSER ./input stderr: ./scanner.at:326: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 457. scanner.at:326: ok 458. scanner.at:326: testing Token numbers: glr.c api.token.raw ... ./scanner.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./scanner.at:326: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:86: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2095:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1934:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2357:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2095:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1920:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2773:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:2095:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1920:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2756:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2744:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stdout: ./scanner.at:326: $EGREP -c 'yytranslate\[\]|translate_table\[\]|translate_table =|translate_table_ =' input.c ======== Testing with C++ standard flags: '' ./scanner.at:326: $PREPARSER ./input stderr: ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS ./scanner.at:326: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 458. scanner.at:326: ok 459. scanner.at:326: testing Token numbers: lalr1.cc ... 
./scanner.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y ./scanner.at:326: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.hh:51, from test.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./scanner.at:326: $EGREP -c 'yytranslate\[\]|translate_table\[\]|translate_table =|translate_table_ =' input.cc ./scanner.at:326: $PREPARSER ./input stderr: ./scanner.at:326: sed >&2 -e '/^profiling:.*:Merge 
mismatch for summaries/d' stderr
459. scanner.at:326: ok
460. scanner.at:326: testing Token numbers: lalr1.cc api.token.raw ...
./scanner.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./scanner.at:326: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./scanner.at:326: $EGREP -c 'yytranslate\[\]|translate_table\[\]|translate_table =|translate_table_ =' input.cc
./scanner.at:326: $PREPARSER ./input
stderr:
./scanner.at:326: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
460. scanner.at:326: ok
461. scanner.at:326: testing Token numbers: glr.cc ...
./scanner.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./scanner.at:326: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: ./check
./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.hh:51,
                 from test.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at test.cc:1573:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:1996:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at test.cc:2412:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at test.cc:1734:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at test.cc:1559:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at test.cc:2395:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at test.cc:2383:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./types.at:139: $PREPARSER ./test stderr: ./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./scanner.at:326: $EGREP -c 'yytranslate\[\]|translate_table\[\]|translate_table =|translate_table_ =' input.cc 
./scanner.at:326: $PREPARSER ./input
stderr:
./scanner.at:326: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
461. scanner.at:326: ok
462. scanner.at:326: testing Token numbers: glr.cc api.token.raw ...
./scanner.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./scanner.at:326: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./scanner.at:326: $EGREP -c 'yytranslate\[\]|translate_table\[\]|translate_table =|translate_table_ =' input.cc
./scanner.at:326: $PREPARSER ./input
stderr:
./scanner.at:326: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
462. scanner.at:326: ok
463. scanner.at:326: testing Token numbers: glr2.cc ...
./scanner.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./scanner.at:326: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
451. types.at:139: ok
464. scanner.at:326: testing Token numbers: glr2.cc api.token.raw ...
./scanner.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y stderr: stdout: ./types.at:139: ./check ./types.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o test.cc test.y ./scanner.at:326: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS ./types.at:139: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from input.cc:86: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2109:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:1948:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2371:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2109:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1934:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at input.cc:2823:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2109:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1934:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2806:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:2794:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./scanner.at:326: $EGREP -c 'yytranslate\[\]|translate_table\[\]|translate_table =|translate_table_ =' input.cc ./scanner.at:326: $PREPARSER ./input stderr: ./scanner.at:326: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 463. scanner.at:326: ok 465. scanner.at:326: testing Token numbers: lalr1.d ... ./scanner.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.d input.y 465. scanner.at:326: skipped (scanner.at:326) 466. scanner.at:326: testing Token numbers: lalr1.d api.token.raw ... ./scanner.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.d input.y 466. 
scanner.at:326: skipped (scanner.at:326)
467. scanner.at:326: testing Token numbers: lalr1.java ...
./scanner.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.java input.y
467. scanner.at:326: skipped (scanner.at:326)
468. scanner.at:326: testing Token numbers: lalr1.java api.token.raw ...
./scanner.at:326: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.java input.y
468. scanner.at:326: skipped (scanner.at:326)
469. scanner.at:330: testing Token numbers: lalr1.cc api.token.raw api.value.type=variant api.token.constructor ...
./scanner.at:330: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./scanner.at:330: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from input.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2109:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:1948:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2371:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2109:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1934:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at input.cc:2823:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2109:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1934:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2806:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:2794:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./scanner.at:326: $EGREP -c 'yytranslate\[\]|translate_table\[\]|translate_table =|translate_table_ =' input.cc
./scanner.at:326: $PREPARSER ./input
stderr:
./scanner.at:326: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
464. scanner.at:326: ok
470. calc.at:1334: testing Calculator parse.trace ...
./calc.at:1334: mv calc.y.tmp calc.y
./calc.at:1334: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from input.cc:57:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1075:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1555:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1075:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1555:19,
    inlined from 'virtual int yy::parser::parse()' at input.cc:1847:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./scanner.at:330: $EGREP -c 'yytranslate\[\]|translate_table\[\]|translate_table =|translate_table_ =' input.cc
./scanner.at:330: $PREPARSER ./input
stderr:
./calc.at:1334: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS
./scanner.at:330: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
469. scanner.at:330: ok
471. calc.at:1336: testing Calculator %header ...
./calc.at:1336: mv calc.y.tmp calc.y
./calc.at:1336: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y
./calc.at:1336: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS
stderr:
stdout:
./types.at:139: $PREPARSER ./test
stderr:
./types.at:139: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
452. types.at:139: ok
472. calc.at:1337: testing Calculator %debug %locations ...
./calc.at:1337: mv calc.y.tmp calc.y
./calc.at:1337: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y
stderr:
stdout:
./calc.at:1336: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE'
./calc.at:1336: "$PERL" -ne '
  chomp;
  print "$ARGV:$.: {$_}\n"
    if (# No starting/ending empty lines.
        (eof || $. == 1) && /^\s*$/
        # No trailing space.
        || /\s$/
        # No tabs.
        || /\t/
       )' calc.c calc.h
stderr:
stdout:
./calc.at:1334: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE'
./calc.at:1334: "$PERL" -ne '
  chomp;
  print "$ARGV:$.: {$_}\n"
    if (# No starting/ending empty lines.
        (eof || $. == 1) && /^\s*$/
        # No trailing space.
        || /\s$/
        # No tabs.
        || /\t/
       )' calc.c
input:
 | 1 + 2 * 3 = 7
 | 1 + 2 * -3 = -5
 |
 | -1^2 = -1
 | (-1)^2 = 1
 |
 | ---1 = -1
 |
 | 1 - 2 - 3 = -4
 | 1 - (2 - 3) = 2
 |
 | 2^2^3 = 256
 | (2^2)^3 = 64
./calc.at:1336: $PREPARSER ./calc input
stderr:
./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
 | 1 + 2 * 3 = 7
 | 1 + 2 * -3 = -5
 |
 | -1^2 = -1
 | (-1)^2 = 1
 |
 | ---1 = -1
 |
 | 1 - 2 - 3 = -4
 | 1 - (2 - 3) = 2
 |
 | 2^2^3 = 256
 | (2^2)^3 = 64
./calc.at:1334: $PREPARSER ./calc input
stderr:
./calc.at:1336: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
 | 1 2
./calc.at:1336: $PREPARSER ./calc input
stderr:
stderr:
syntax error
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Stack now 0 1
Reducing stack by rule 5 (line 81):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Stack now 0 8 21
Reading a token
Next token is token "number" (1.1: 2)
Shifting token "number" (1.1: 2)
Entering state 1
Stack now 0 8 21 1
Reducing stack by rule 5 (line 81):
   $1 = token "number"
(1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 7) Shifting token "number" (1.1: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 7) -> $$ = nterm exp (1.1: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 7) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 7) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 7) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 
1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: -3) -> $$ = nterm exp (1.1: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: -6) -> $$ = nterm exp (1.1: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 5) Shifting token "number" (1.1: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 5) -> $$ = nterm exp (1.1: 5) Entering state 10 Stack now 0 6 8 19 2 
10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 5) -> $$ = nterm exp (1.1: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -5) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -5) -> $$ = nterm exp (1.1: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -5) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 
= nterm exp (1.1: 1) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp 
(1.1: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: -1) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: -1) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) 
-> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm 
exp (1.1: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ 
= nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: -1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 4) Shifting token "number" (1.1: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 4) -> $$ = nterm exp (1.1: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 4) -> $$ = nterm exp (1.1: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -4) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -4) -> $$ = nterm exp (1.1: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -4) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack 
now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 2) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: -1) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = 
token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) 
-> $$ = nterm exp (1.1: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 8) -> $$ = nterm exp (1.1: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 256) Shifting token "number" (1.1: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 256) -> $$ = nterm exp (1.1: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 256) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 256) -> $$ = nterm exp (1.1: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 256) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token "number" (1.1: 
2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 4) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 4) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 64) Shifting token "number" (1.1: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 64) -> $$ = nterm exp (1.1: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 64) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 64) -> $$ = nterm exp (1.1: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 
= nterm exp (1.1: 64) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: syntax error Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 
= token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 7) Shifting token "number" (1.1: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 7) -> $$ = nterm exp (1.1: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 7) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 7) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 7) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 21 30 22 
2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: -3) -> $$ = nterm exp (1.1: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: -6) -> $$ = nterm exp (1.1: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 5) Shifting token "number" (1.1: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 5) -> $$ = nterm exp (1.1: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 5) -> $$ = nterm exp (1.1: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -5) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -5) -> $$ = nterm exp (1.1: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -5) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) 
Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 1) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 
0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: -1) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack 
by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: -1) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 2 2 
Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) 
$2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: -1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' 
(1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 4) Shifting token "number" (1.1: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 4) -> $$ = nterm exp (1.1: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 4) -> $$ = nterm exp (1.1: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -4) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -4) -> $$ = nterm exp (1.1: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -4) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next 
token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 2) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: -1) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' 
(1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 8) -> $$ = nterm exp (1.1: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 256) Shifting token "number" (1.1: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 
= token "number" (1.1: 256) -> $$ = nterm exp (1.1: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 256) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 256) -> $$ = nterm exp (1.1: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 256) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 4) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (1.1: ) Shifting token 
'^' (1.1: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 4) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 64) Shifting token "number" (1.1: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 64) -> $$ = nterm exp (1.1: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 64) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 64) -> $$ = nterm exp (1.1: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 64) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1334: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1337: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS input: | 1 2 ./calc.at:1334: $PREPARSER ./calc input ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token "number" (1.1: 2) Stack now 0 ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token "number" (1.1: 2) Stack now 0 ./calc.at:1336: cat stderr input: | 1//2 ./calc.at:1336: $PREPARSER ./calc input ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1334: cat stderr input: ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | 1//2 ./calc.at:1334: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.1: ) Shifting token '/' (1.1: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.1: ) syntax error Error: popping token '/' (1.1: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.1: ) Stack now 0 ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.1: ) Shifting token '/' (1.1: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.1: ) syntax error Error: popping token '/' (1.1: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.1: ) Stack now 0 ./calc.at:1336: cat stderr input: | 
error ./calc.at:1336: $PREPARSER ./calc input ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1334: cat stderr input: | error ./calc.at:1334: $PREPARSER ./calc input stderr: ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "invalid token" (1.1: ) syntax error Cleanup: discarding lookahead token "invalid token" (1.1: ) Stack now 0 ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "invalid token" (1.1: ) syntax error Cleanup: discarding lookahead token "invalid token" (1.1: ) Stack now 0 ./calc.at:1336: cat stderr input: | 1 = 2 = 3 ./calc.at:1336: $PREPARSER ./calc input stderr: syntax error ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error ./calc.at:1334: cat stderr input: | 1 = 2 = 3 ./calc.at:1334: $PREPARSER ./calc input ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.1: ) syntax error Error: popping nterm exp (1.1: 2) Stack now 0 8 19 Error: popping token '=' (1.1: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.1: ) Stack now 0 ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" 
(1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.1: ) syntax error Error: popping nterm exp (1.1: 2) Stack now 0 8 19 Error: popping token '=' (1.1: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.1: ) Stack now 0 ./calc.at:1336: cat stderr input: | | +1 ./calc.at:1336: $PREPARSER ./calc input stderr: syntax error ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error ./calc.at:1334: cat stderr input: | | +1 ./calc.at:1334: $PREPARSER ./calc input ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (1.1: ) syntax error Error: popping nterm input (1.1: ) Stack now 0 Cleanup: discarding lookahead token '+' (1.1: ) Stack now 0 ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1336: cat stderr stderr: ./calc.at:1336: $PREPARSER ./calc /dev/null Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (1.1: ) syntax error Error: popping nterm input (1.1: ) Stack now 0 Cleanup: discarding lookahead token '+' (1.1: ) Stack now 0 stderr: syntax error ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1334: cat stderr ./calc.at:1334: $PREPARSER ./calc /dev/null ./calc.at:1336: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. syntax error Cleanup: discarding lookahead token "end of input" (1.1: ) Stack now 0 ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1336: $PREPARSER ./calc input stderr: stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. syntax error Cleanup: discarding lookahead token "end of input" (1.1: ) Stack now 0 stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
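[Editor's note] The Perl one-liner repeated throughout this log normalizes the expected-output files (`expout`): it rewrites diagnostics of the form `syntax error on token [X] (expected: [A] [B])` into the classic yacc-style message, listing the expected tokens only when there are between two and four of them (`$#exps && $#exps < 4`). A minimal Python sketch of that same transformation (the function name `normalize` is illustrative, not part of the test suite):

```python
import re

def normalize(line):
    # Mirror the test suite's Perl normalization: rewrite
    # "syntax error on token [X] (expected: [A] [B] ...)" into
    # "syntax error, unexpected X, expecting A or B" when 2..4
    # expected tokens are listed, else drop the expectation list.
    def repl(m):
        unexp = m.group(1)
        exps = re.findall(r"\[(.*?)\]", m.group(2))
        if 2 <= len(exps) <= 4:
            return ("syntax error, unexpected %s, expecting %s"
                    % (unexp, " or ".join(exps)))
        return "syntax error, unexpected %s" % unexp
    return re.sub(r"syntax error on token \[(.*?)\] \(expected: (.*)\)",
                  repl, line)
```

Lines that do not match the pattern pass through unchanged, which is why the one-liner is safe to run over every `expout` file before comparison.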
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1334: cat stderr input: ./calc.at:1336: cat stderr | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1334: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 
Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 2) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 3) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.1: ) syntax error Error: popping token '+' (1.1: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.1: 3) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: 
) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 2222) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 1) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a 
token Next token is token '*' (1.1: ) syntax error Error: popping token '*' (1.1: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.1: 2) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 3333) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 4444) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 4444 != 1 -> $$ = nterm exp (1.1: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 4444) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (!!) + (1 2) = 1 ./calc.at:1336: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.1: ) Shifting 
token '+' (1.1: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 2) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 3) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.1: ) syntax error Error: popping token '+' (1.1: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.1: 3) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' 
(1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 2222) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 1) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering 
state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.1: ) syntax error Error: popping token '*' (1.1: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.1: 2) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 3333) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 4444) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 4444 != 1 -> $$ = nterm exp (1.1: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 4444) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) stderr: syntax error error: 2222 != 1 ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error error: 2222 != 1 ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1334: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1334: $PREPARSER ./calc input ./calc.at:1336: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 121): $1 = token '!' (1.1: ) $2 = token '!' 
(1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.1: 2) Error: discarding token "number" (1.1: 2) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 
(line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 2222 != 1 -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (- *) + (1 2) = 1 ./calc.at:1336: $PREPARSER ./calc input stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error error: 2222 != 1 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 121): $1 = token '!' (1.1: ) $2 = token '!' 
(1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.1: 2) Error: discarding token "number" (1.1: 2) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 
(line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 2222 != 1 -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1336: cat stderr ./calc.at:1334: cat stderr input: | (* *) + (*) + (*) ./calc.at:1336: $PREPARSER ./calc input input: | (- *) + (1 2) = 1 ./calc.at:1334: $PREPARSER ./calc input stderr: syntax error syntax error syntax error ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 120): $1 = token '-' (1.1: ) $2 = token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.1: 
2) syntax error Error: popping nterm exp (1.1: 1) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.1: 2) Error: discarding token "number" (1.1: 2) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 2222 != 1 -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 120): $1 = token '-' (1.1: ) $2 = token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.1: 2) Error: discarding 
token "number" (1.1: 2) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 2222 != 1 -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) stderr: syntax error syntax error syntax error ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1334: cat stderr ./calc.at:1336: cat stderr input: | (* *) + (*) + (*) ./calc.at:1334: $PREPARSER ./calc input input: | 1 + 2 * 3 + !+ ++ stderr: ./calc.at:1336: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token 
'+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 2222) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 
1111) -> $$ = nterm exp (1.1: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 3333) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) stderr: ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1336: $EGREP -c -v 'Return for a new token:|LAC:' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting 
token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 2222) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) 
Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 3333) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) input: | 1 + 2 * 3 + !- ++ ./calc.at:1336: $PREPARSER ./calc input stderr: ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1334: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1334: $PREPARSER ./calc input ./calc.at:1336: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.1: ) Shifting token '!' 
(1.1: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 122): $1 = token '!' (1.1: ) $2 = token '+' (1.1: ) Stack now 0 8 21 Cleanup: popping token '+' (1.1: ) Cleanup: popping nterm exp (1.1: 7) ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | 1 + 2 * 3 + !* ++ ./calc.at:1336: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) 
Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 122): $1 = token '!' (1.1: ) $2 = token '+' (1.1: ) Stack now 0 8 21 Cleanup: popping token '+' (1.1: ) Cleanup: popping nterm exp (1.1: 7) ./calc.at:1334: $EGREP -c -v 'Return for a new token:|LAC:' stderr stderr: memory exhausted ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 + 2 * 3 + !- ++ stderr: memory exhausted ./calc.at:1334: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.1: ) Reducing 
stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 123): $1 = token '!' (1.1: ) $2 = token '-' (1.1: ) Stack now 0 8 21 Cleanup: popping token '+' (1.1: ) Cleanup: popping nterm exp (1.1: 7) ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token 
is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 123): $1 = token '!' (1.1: ) $2 = token '-' (1.1: ) Stack now 0 8 21 Cleanup: popping token '+' (1.1: ) Cleanup: popping nterm exp (1.1: 7) ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1336: cat stderr input: ./calc.at:1334: cat stderr | (#) + (#) = 2222 ./calc.at:1336: $PREPARSER ./calc input stderr: input: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | 1 + 2 * 3 + !* ++ ./calc.at:1334: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering 
state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 124): $1 = token '!' (1.1: ) $2 = token '*' (1.1: ) memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.1: ) Cleanup: popping nterm exp (1.1: 7) ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm 
exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 124): $1 = token '!' (1.1: ) $2 = token '*' (1.1: ) memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.1: ) Cleanup: popping nterm exp (1.1: 7) ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1336: cat stderr ./calc.at:1334: cat stderr input: | (1 + #) = 1111 ./calc.at:1336: $PREPARSER ./calc input input: | (#) + (#) = 2222 ./calc.at:1334: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token syntax error: invalid character: '#' Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token syntax error: invalid character: '#' Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = 
nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 2222) Shifting token "number" (1.1: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2222) -> $$ = nterm exp (1.1: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 2222) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token syntax error: invalid character: '#' Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token syntax error: invalid character: '#' Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): 
$1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 2222) Shifting token "number" (1.1: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2222) -> $$ = nterm exp (1.1: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 2222) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1334: cat stderr ./calc.at:1336: cat stderr input: | (# + 1) = 1111 ./calc.at:1336: $PREPARSER ./calc input input: | (1 + #) = 1111 ./calc.at:1334: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 4 12 21 Reading a token syntax error: invalid character: '#' Error: popping token '+' (1.1: ) Stack now 0 4 12 Error: popping nterm exp (1.1: 1) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1111) Shifting token "number" (1.1: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1111) -> $$ = nterm exp (1.1: 1111) 
Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1111) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1111) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) stderr: ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr syntax error: invalid character: '#' stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 4 12 21 Reading a token syntax error: invalid character: '#' Error: popping token '+' (1.1: ) Stack now 0 4 12 Error: popping nterm exp (1.1: 1) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 
0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1111) Shifting token "number" (1.1: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1111) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1111) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1336: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1336: $PREPARSER ./calc input ./calc.at:1334: cat stderr stderr: syntax error: invalid character: '#' ./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | (# + 1) = 1111 ./calc.at:1334: $PREPARSER ./calc input syntax error: invalid character: '#' stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token syntax error: invalid character: '#' Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.1: ) Error: discarding token '+' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token "number" (1.1: 1) Error: discarding token "number" (1.1: 1) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1111) Shifting token "number" (1.1: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1111) -> $$ = nterm exp (1.1: 1111) 
Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1111) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1111) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token syntax error: invalid character: '#' Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.1: ) Error: discarding token '+' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token "number" (1.1: 1) Error: discarding token "number" (1.1: 1) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) 
$3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1111) Shifting token "number" (1.1: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1111) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1111) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1336: cat stderr ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
    "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
input:
| (1 + 1) / (1 - 1)
./calc.at:1336: $PREPARSER ./calc input
stderr:
error: null divisor
./calc.at:1336: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1334: cat stderr
stderr:
error: null divisor
input:
| (1 + # + 1) = 1111
./calc.at:1334: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Stack now 0 4
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Stack now 0 4 1
Reducing stack by rule 5 (line 81):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 12
Stack now 0 4 12
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Stack now 0 4 12 21
Reading a token
syntax error: invalid character: '#'
Error: popping token '+' (1.1: )
Stack now 0 4 12
Error: popping nterm exp (1.1: 1)
Stack now 0 4
Shifting token error (1.1: )
Entering state 11
Stack now 0 4 11
Next token is token "invalid token" (1.1: )
Error: discarding token "invalid token" (1.1: )
Error: popping token error (1.1: )
Stack now 0 4
Shifting token error (1.1: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token '+' (1.1: )
Error: discarding token '+' (1.1: )
Error: popping token error (1.1: )
Stack now 0 4
Shifting token error (1.1: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token "number" (1.1: 1)
Error: discarding token "number" (1.1: 1)
Error: popping token error (1.1: )
Stack now 0 4
Shifting token error (1.1: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Stack now 0 4 11 26
Reducing stack by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1111) Shifting token "number" (1.1: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1111) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1111) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1336: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 4 12 21 Reading a token syntax error: invalid character: '#' Error: popping token '+' (1.1: ) Stack now 0 4 12 Error: popping nterm exp (1.1: 1) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.1: ) Error: discarding token '+' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token "number" (1.1: 1) Error: discarding token "number" (1.1: 1) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1111) Shifting token "number" (1.1: 1111) Entering state 1 Stack now 0 8 19 1 
Reducing stack by rule 5 (line 81):
   $1 = token "number" (1.1: 1111)
-> $$ = nterm exp (1.1: 1111)
Entering state 28
Stack now 0 8 19 28
Reading a token
Next token is token '\n' (1.1: )
Reducing stack by rule 6 (line 82):
   $1 = nterm exp (1.1: 1111)
   $2 = token '=' (1.1: )
   $3 = nterm exp (1.1: 1111)
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Stack now 0 8
Next token is token '\n' (1.1: )
Shifting token '\n' (1.1: )
Entering state 25
Stack now 0 8 25
Reducing stack by rule 4 (line 77):
   $1 = nterm exp (1.1: 1111)
   $2 = token '\n' (1.1: )
-> $$ = nterm line (1.1: )
Entering state 7
Stack now 0 7
Reducing stack by rule 1 (line 71):
   $1 = nterm line (1.1: )
-> $$ = nterm input (1.1: )
Entering state 6
Stack now 0 6
Reading a token
Now at end of input.
Shifting token "end of input" (1.1: )
Entering state 17
Stack now 0 6 17
Stack now 0 6 17
Cleanup: popping token "end of input" (1.1: )
Cleanup: popping nterm input (1.1: )
./calc.at:1336: cat stderr
471. calc.at:1336: ok
./calc.at:1334: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1334: cat stderr stderr: stdout: input: ./calc.at:1337: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' | (1 + 1) / (1 - 1) ./calc.at:1334: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 2) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.1: ) Shifting token '/' (1.1: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" 
(1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 0) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 10 (line 101): $1 = nterm exp (1.1: 2) $2 = token '/' (1.1: ) $3 = nterm exp (1.1: 0) error: null divisor -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1334: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1337: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. 
|| /\s$/ # No tabs. || /\t/ )' calc.c stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 2) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.1: ) Shifting token '/' (1.1: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 
Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 0) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 10 (line 101): $1 = nterm exp (1.1: 2) $2 = token '/' (1.1: ) $3 = nterm exp (1.1: 0) error: null divisor -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1337: $PREPARSER ./calc input ./calc.at:1334: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' 
(1.14-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 123): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 107): $1 = nterm exp 
(2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 123): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm 
line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 124): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 123): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 123): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (4.1-4: 
-1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 123): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 125): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 124): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = 
nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 88): $1 = token 
"number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 123): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 123): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 123): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 123): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token 
is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 106): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 106): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 19 2 
Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 123): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading 
a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 106): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 125): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 106): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line 
(10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 124): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 124): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting 
token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 124): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next 
token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 125): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 124): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1334: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 
(line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 123): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm 
exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 107): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 123): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' 
(3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 124): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 123): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 123): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm 
exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 123): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 125): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 
8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 124): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is 
token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 123): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 123): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 123): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 123): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = 
nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 106): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 106): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering 
state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 123): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 
0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 106): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 125): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 106): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm 
line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 124): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 124): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) 
$3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 124): $1 = nterm exp 
(13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 125): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 124): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> 
$$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1337: $EGREP -c -v 'Return for a new token:|LAC:' stderr 473. calc.at:1338: testing Calculator %locations api.location.type={Span} ... ./calc.at:1338: mv calc.y.tmp calc.y 470. calc.at:1334: ok ./calc.at:1338: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y input: | 1 2 ./calc.at:1337: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token "number" (1.3: 2) Stack now 0 ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token "number" (1.3: 2) Stack now 0 ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1337: cat stderr input: | 1//2 ./calc.at:1337: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 474. calc.at:1340: testing Calculator %name-prefix "calc" ... ./calc.at:1340: mv calc.y.tmp calc.y Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1340: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1337: cat stderr input: | error ./calc.at:1337: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error Cleanup: discarding lookahead token "invalid token" (1.1: ) Stack now 0 ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error Cleanup: discarding lookahead token "invalid token" (1.1: ) Stack now 0 ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1337: cat stderr ./calc.at:1338: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS input: | 1 = 2 = 3 ./calc.at:1337: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping 
nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1337: cat stderr ./calc.at:1340: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS input: | | +1 ./calc.at:1337: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1337: cat stderr ./calc.at:1337: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 1.1: syntax error Cleanup: discarding lookahead token "end of input" (1.1: ) Stack now 0 ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 1.1: syntax error Cleanup: discarding lookahead token "end of input" (1.1: ) Stack now 0 ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1337: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1337: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 
(line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next 
token is token '+' (1.20: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.33: 1) Shifting 
token "number" (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 107): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 
19 Stack now 0 8 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 
Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' 
(1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next 
token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 107): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp 
(1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1337: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1337: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' 
(1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 128): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' 
(1.14: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 128): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 
1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1337: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1337: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 127): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.12: 2) Error: 
discarding token "number" (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 127): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token 
"number" (1.12: 2) Error: discarding token "number" (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1337: cat stderr input: | (* *) + (*) + (*) ./calc.at:1337: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) 
Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 
(line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 
26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1337: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1337: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp 
(1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 129): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' 
(1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 129): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1337: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1337: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next 
token is token '+' (1.11: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 130): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp 
(1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 130): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1337: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1337: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token 
"number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 131): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 
22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 131): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) stderr: stdout: ./calc.at:1340: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1340: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.c input: ./calc.at:1337: cat stderr | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1340: $PREPARSER ./calc input stderr: ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (#) + (#) = 2222 ./calc.at:1337: $PREPARSER ./calc input stderr: ./calc.at:1340: $EGREP -c -v 'Return for a new token:|LAC:' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.2: ) Error: discarding token "invalid token" (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "invalid token" (1.8: ) Error: discarding token "invalid token" (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.7: ) $2 = token error 
(1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.2: ) Error: discarding token "invalid token" (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "invalid token" (1.8: ) Error: discarding token "invalid token" (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.1-3: 
1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | 1 2 ./calc.at:1340: $PREPARSER ./calc input stderr: syntax error ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1337: cat stderr input: | (1 + #) = 1111 ./calc.at:1337: $PREPARSER ./calc input ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.6: ) Error: discarding token "invalid token" (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next 
token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1340: cat stderr ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.6: ) 
Error: discarding token "invalid token" (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | 1//2 ./calc.at:1340: $PREPARSER ./calc input stderr: ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error stderr: ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stdout: ./calc.at:1338: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1338: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c ./calc.at:1337: cat stderr stderr: syntax error input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1338: $PREPARSER ./calc input input: stderr: ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | (# + 1) = 1111 ./calc.at:1337: $PREPARSER ./calc input stderr: ./calc.at:1338: $EGREP -c -v 'Return for a new token:|LAC:' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.2: ) Error: discarding token "invalid token" (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp 
(1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 2 ./calc.at:1340: cat stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.2: ) Error: discarding token "invalid token" (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a 
token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1338: $PREPARSER ./calc input stderr: 1.3: syntax error ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: 1.3: syntax error ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | error ./calc.at:1340: $PREPARSER ./calc input stderr: syntax error ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error ./calc.at:1337: cat stderr ./calc.at:1338: cat stderr input: ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | (1 + # + 1) = 1111 ./calc.at:1337: $PREPARSER ./calc input input: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.6: ) Error: discarding token "invalid token" (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 
Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.15-18: 1111) Shifting token "number" (1.15-18: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) | 1//2 ./calc.at:1338: $PREPARSER ./calc input ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.6: ) Error: discarding token "invalid token" (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' 
(1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.15-18: 1111) Shifting token "number" (1.15-18: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1340: cat stderr stderr: 1.3: syntax error input: | 1 = 2 = 3 ./calc.at:1340: $PREPARSER ./calc input stderr: ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1337: cat stderr syntax error ./calc.at:1338: cat stderr input: | (1 + 1) / (1 - 1) input: ./calc.at:1337: $PREPARSER ./calc input | error ./calc.at:1338: $PREPARSER ./calc input stderr: ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.1: syntax error ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 125): $1 = token '(' (1.1: 
) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 106): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 125): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 108): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) 
Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1337: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 105): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 125): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token "number" (1.12: 1) 
Shifting token "number" (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 106): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 125): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 108): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) 1.1: syntax error ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1340: cat stderr ./calc.at:1337: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1338: cat stderr input: | | +1 input: ./calc.at:1340: $PREPARSER ./calc input | 1 = 2 = 3 ./calc.at:1337: cat stderr ./calc.at:1338: $PREPARSER ./calc input stderr: stderr: syntax error ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 472. calc.at:1337: ok 1.7: syntax error ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error stderr: 1.7: syntax error ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1338: cat stderr ./calc.at:1340: cat stderr input: ./calc.at:1340: $PREPARSER ./calc /dev/null | | +1 ./calc.at:1338: $PREPARSER ./calc input stderr: syntax error stderr: ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 2.1: syntax error 475. calc.at:1341: testing Calculator %verbose ... ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1341: mv calc.y.tmp calc.y stderr: ./calc.at:1341: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y syntax error stderr: 2.1: syntax error ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1338: cat stderr ./calc.at:1340: cat stderr ./calc.at:1338: $PREPARSER ./calc /dev/null input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 stderr: ./calc.at:1340: $PREPARSER ./calc input 1.1: syntax error ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1338: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1338: $PREPARSER ./calc input ./calc.at:1340: cat stderr ./calc.at:1341: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 input: | (!!) 
+ (1 2) = 1 ./calc.at:1340: $PREPARSER ./calc input stderr: syntax error error: 2222 != 1 ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error error: 2222 != 1 ./calc.at:1338: cat stderr input: ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | (!!) + (1 2) = 1 ./calc.at:1338: $PREPARSER ./calc input stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1340: cat stderr stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 input: | (- *) + (1 2) = 1 ./calc.at:1340: $PREPARSER ./calc input ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1338: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1338: $PREPARSER ./calc input ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1340: cat stderr stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 input: | (* *) + (*) + (*) ./calc.at:1340: $PREPARSER ./calc input stderr: ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error syntax error syntax error ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error ./calc.at:1338: cat stderr input: | (* *) + (*) + (*) ./calc.at:1338: $PREPARSER ./calc input stderr: ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1340: cat stderr ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 + 2 * 3 + !+ ++ ./calc.at:1340: $PREPARSER ./calc input stderr: ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1338: cat stderr stderr: ./calc.at:1340: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1338: $PREPARSER ./calc input input: stderr: | 1 + 2 * 3 + !- ++ ./calc.at:1340: $PREPARSER ./calc input ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1338: $EGREP -c -v 'Return for a new token:|LAC:' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1338: $PREPARSER ./calc input stderr: ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1340: cat stderr input: ./calc.at:1338: cat stderr | 1 + 2 * 3 + !* ++ ./calc.at:1340: $PREPARSER ./calc input stderr: memory exhausted ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | 1 + 2 * 3 + !* ++ ./calc.at:1338: $PREPARSER ./calc input memory exhausted stderr: 1.14: memory exhausted ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.14: memory exhausted ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1340: cat stderr input: | (#) + (#) = 2222 ./calc.at:1340: $PREPARSER ./calc input ./calc.at:1338: cat stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' input: | (#) + (#) = 2222 ./calc.at:1338: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1340: cat stderr ./calc.at:1338: cat stderr input: | (1 + #) = 1111 ./calc.at:1340: $PREPARSER ./calc input stderr: input: | (1 + #) = 1111 syntax error: invalid character: '#' ./calc.at:1338: $PREPARSER ./calc input ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr syntax error: invalid character: '#' stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1338: cat stderr ./calc.at:1340: cat stderr input: input: | (# + 1) = 1111 ./calc.at:1340: $PREPARSER ./calc input | (# + 1) = 1111 ./calc.at:1338: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr syntax error: invalid character: '#' stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1338: cat stderr ./calc.at:1340: cat stderr input: input: | (1 + # + 1) = 1111 ./calc.at:1338: $PREPARSER ./calc input | (1 + # + 1) = 1111 stderr: ./calc.at:1340: $PREPARSER ./calc input 1.6: syntax error: invalid character: '#' ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 1.6: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1338: cat stderr ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (1 + 1) / (1 - 1) ./calc.at:1338: $PREPARSER ./calc input ./calc.at:1340: cat stderr stderr: 1.11-17: error: null divisor ./calc.at:1338: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (1 + 1) / (1 - 1) ./calc.at:1340: $PREPARSER ./calc input stderr: stderr: error: null divisor ./calc.at:1340: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 1.11-17: error: null divisor stderr: error: null divisor ./calc.at:1338: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1340: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1338: cat stderr 473. calc.at:1338: ok ./calc.at:1340: cat stderr 474. calc.at:1340: ok stderr: stdout: ./calc.at:1341: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1341: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.c input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1341: $PREPARSER ./calc input stderr: ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1341: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1341: $PREPARSER ./calc input 476. calc.at:1342: testing Calculator %yacc ... ./calc.at:1342: if "$POSIXLY_CORRECT_IS_EXPORTED"; then sed -e '/\/\* !POSIX \*\//d' calc.y.tmp >calc.y else mv calc.y.tmp calc.y fi stderr: syntax error ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1342: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y stderr: syntax error 477. calc.at:1343: testing Calculator parse.error=detailed ... ./calc.at:1343: mv calc.y.tmp calc.y ./calc.at:1343: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1341: cat stderr input: | 1//2 ./calc.at:1341: $PREPARSER ./calc input stderr: syntax error ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1342: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS ./calc.at:1341: cat stderr input: | error ./calc.at:1341: $PREPARSER ./calc input stderr: syntax error ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1343: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1341: cat stderr input: | 1 = 2 = 3 ./calc.at:1341: $PREPARSER ./calc input stderr: syntax error ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1341: cat stderr input: | | +1 ./calc.at:1341: $PREPARSER ./calc input stderr: syntax error ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1341: cat stderr ./calc.at:1341: $PREPARSER ./calc /dev/null stderr: syntax error ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1341: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1341: $PREPARSER ./calc input stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1341: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1341: $PREPARSER ./calc input stderr: syntax error error: 2222 != 1 ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error error: 2222 != 1 ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1341: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1341: $PREPARSER ./calc input stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1341: cat stderr input: | (* *) + (*) + (*) ./calc.at:1341: $PREPARSER ./calc input stderr: syntax error syntax error syntax error ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1341: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1341: $PREPARSER ./calc input stderr: ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1341: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1341: $PREPARSER ./calc input stderr: ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: stdout: ./calc.at:1342: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1341: cat stderr ./calc.at:1342: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c input: input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 | 1 + 2 * 3 + !* ++ ./calc.at:1342: $PREPARSER ./calc input ./calc.at:1341: $PREPARSER ./calc input stderr: stderr: memory exhausted ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: ./calc.at:1342: $EGREP -c -v 'Return for a new token:|LAC:' stderr memory exhausted input: | 1 2 ./calc.at:1342: $PREPARSER ./calc input stderr: syntax error ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1341: cat stderr ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (#) + (#) = 2222 ./calc.at:1341: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1342: cat stderr input: | 1//2 ./calc.at:1342: $PREPARSER ./calc input stderr: syntax error ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1341: cat stderr input: | (1 + #) = 1111 ./calc.at:1341: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error: invalid character: '#' ./calc.at:1342: cat stderr ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | error ./calc.at:1342: $PREPARSER ./calc input stderr: syntax error ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1341: cat stderr ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (# + 1) = 1111 ./calc.at:1341: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1342: cat stderr stderr: input: syntax error: invalid character: '#' | 1 = 2 = 3 ./calc.at:1342: $PREPARSER ./calc input stderr: syntax error ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1341: cat stderr ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (1 + # + 1) = 1111 ./calc.at:1341: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1342: cat stderr stderr: syntax error: invalid character: '#' input: | | +1 ./calc.at:1342: $PREPARSER ./calc input stderr: syntax error ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1341: cat stderr ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (1 + 1) / (1 - 1) ./calc.at:1341: $PREPARSER ./calc input stderr: stderr: error: null divisor ./calc.at:1341: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stdout: ./calc.at:1343: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1342: cat stderr stderr: error: null divisor ./calc.at:1343: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.c ./calc.at:1342: $PREPARSER ./calc /dev/null stderr: syntax error ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 stderr: ./calc.at:1343: $PREPARSER ./calc input syntax error stderr: ./calc.at:1341: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1343: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1341: cat stderr ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 2 ./calc.at:1343: $PREPARSER ./calc input 475. calc.at:1341: ok stderr: syntax error, unexpected number ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1342: cat stderr syntax error, unexpected number input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1342: $PREPARSER ./calc input stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1343: cat stderr input: | 1//2 ./calc.at:1343: $PREPARSER ./calc input stderr: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1342: cat stderr ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (!!) + (1 2) = 1 ./calc.at:1342: $PREPARSER ./calc input stderr: 478. calc.at:1344: testing Calculator parse.error=verbose ... 
./calc.at:1344: mv calc.y.tmp calc.y syntax error error: 2222 != 1 ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1343: cat stderr ./calc.at:1344: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y stderr: syntax error error: 2222 != 1 input: | error ./calc.at:1343: $PREPARSER ./calc input stderr: syntax error, unexpected invalid token ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error, unexpected invalid token ./calc.at:1342: cat stderr ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (- *) + (1 2) = 1 ./calc.at:1342: $PREPARSER ./calc input ./calc.at:1343: cat stderr stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 = 2 = 3 ./calc.at:1343: $PREPARSER ./calc input stderr: stderr: syntax error, unexpected '=' ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr syntax error syntax error error: 2222 != 1 stderr: syntax error, unexpected '=' ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1343: cat stderr ./calc.at:1342: cat stderr input: | | +1 input: ./calc.at:1343: $PREPARSER ./calc input | (* *) + (*) + (*) stderr: ./calc.at:1342: $PREPARSER ./calc input syntax error, unexpected '+' ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '+' ./calc.at:1344: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS stderr: syntax error syntax error syntax error ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1343: cat stderr ./calc.at:1343: $PREPARSER ./calc /dev/null stderr: ./calc.at:1342: cat stderr syntax error, unexpected end of file ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected end of file input: | 1 + 2 * 3 + !+ ++ ./calc.at:1342: $PREPARSER ./calc input stderr: ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1342: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1343: cat stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1342: $PREPARSER ./calc input input: stderr: ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1343: $PREPARSER ./calc input stderr: stderr: syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' error: 4444 != 1 ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
error: 4444 != 1 ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1342: cat stderr ./calc.at:1343: cat stderr input: input: | (!!) + (1 2) = 1 ./calc.at:1343: $PREPARSER ./calc input | 1 + 2 * 3 + !* ++ ./calc.at:1342: $PREPARSER ./calc input stderr: syntax error, unexpected number error: 2222 != 1 ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: memory exhausted ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected number error: 2222 != 1 stderr: memory exhausted ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1343: cat stderr ./calc.at:1342: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1343: $PREPARSER ./calc input stderr: input: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
syntax error, unexpected number error: 2222 != 1 | (#) + (#) = 2222 ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1342: $PREPARSER ./calc input stderr: syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected number error: 2222 != 1 stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1343: cat stderr ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (* *) + (*) + (*) ./calc.at:1343: $PREPARSER ./calc input stderr: syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1342: cat stderr syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
input: | (1 + #) = 1111 ./calc.at:1342: $PREPARSER ./calc input stderr: ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error: invalid character: '#' ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1343: cat stderr stderr: syntax error: invalid character: '#' input: | 1 + 2 * 3 + !+ ++ ./calc.at:1343: $PREPARSER ./calc input stderr: ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: ./calc.at:1343: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1342: cat stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1343: $PREPARSER ./calc input input: | (# + 1) = 1111 stderr: ./calc.at:1342: $PREPARSER ./calc input ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: syntax error: invalid character: '#' ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1343: cat stderr ./calc.at:1342: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1343: $PREPARSER ./calc input input: stderr: | (1 + # + 1) = 1111 ./calc.at:1342: $PREPARSER ./calc input memory exhausted ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: memory exhausted syntax error: invalid character: '#' ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1343: cat stderr ./calc.at:1342: cat stderr input: | (#) + (#) = 2222 ./calc.at:1343: $PREPARSER ./calc input stderr: input: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | (1 + 1) / (1 - 1) ./calc.at:1342: $PREPARSER ./calc input stderr: stderr: error: null divisor syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1342: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: error: null divisor ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1343: cat stderr ./calc.at:1342: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (1 + #) = 1111 ./calc.at:1343: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1342: cat stderr stderr: syntax error: invalid character: '#' 476. calc.at:1342: ok ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1343: cat stderr input: | (# + 1) = 1111 ./calc.at:1343: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 479. calc.at:1346: testing Calculator api.pure=full %locations ... ./calc.at:1346: mv calc.y.tmp calc.y ./calc.at:1343: cat stderr ./calc.at:1346: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y input: | (1 + # + 1) = 1111 ./calc.at:1343: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1343: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1343: $PREPARSER ./calc input stderr: error: null divisor ./calc.at:1343: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: error: null divisor ./calc.at:1343: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1343: cat stderr 477. calc.at:1343: ok ./calc.at:1346: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS stderr: stdout: ./calc.at:1344: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' 480. calc.at:1347: testing Calculator api.push-pull=both api.pure=full %locations ... ./calc.at:1347: mv calc.y.tmp calc.y ./calc.at:1344: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c ./calc.at:1347: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1344: $PREPARSER ./calc input stderr: ./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1344: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1344: $PREPARSER ./calc input stderr: syntax error, unexpected number ./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected number ./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1344: cat stderr input: | 1//2 ./calc.at:1344: $PREPARSER ./calc input stderr: syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected '/', expecting number or '-' or '(' or '!'
./calc.at:1347: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1344: cat stderr
input:
| error
./calc.at:1344: $PREPARSER ./calc input
stderr:
syntax error, unexpected invalid token
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected invalid token
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1344: cat stderr
input:
| 1 = 2 = 3
./calc.at:1344: $PREPARSER ./calc input
stderr:
syntax error, unexpected '='
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected '='
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1344: cat stderr
input:
|
| +1
./calc.at:1344: $PREPARSER ./calc input
stderr:
syntax error, unexpected '+'
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected '+'
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1344: cat stderr
./calc.at:1344: $PREPARSER ./calc /dev/null
stderr:
syntax error, unexpected end of input
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected end of input
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1344: cat stderr
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1344: $PREPARSER ./calc input
stderr:
syntax error, unexpected ')', expecting number or '-' or '(' or '!'
syntax error, unexpected ')', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
error: 4444 != 1
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected ')', expecting number or '-' or '(' or '!'
syntax error, unexpected ')', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
error: 4444 != 1
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1344: cat stderr
input:
| (!!) + (1 2) = 1
./calc.at:1344: $PREPARSER ./calc input
stderr:
syntax error, unexpected number
error: 2222 != 1
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected number
error: 2222 != 1
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1344: cat stderr
input:
| (- *) + (1 2) = 1
./calc.at:1344: $PREPARSER ./calc input
stderr:
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected number
error: 2222 != 1
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected number
error: 2222 != 1
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1344: cat stderr
input:
| (* *) + (*) + (*)
./calc.at:1344: $PREPARSER ./calc input
stderr:
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1344: cat stderr
input:
| 1 + 2 * 3 + !+ ++
./calc.at:1344: $PREPARSER ./calc input
stderr:
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1344: $EGREP -c -v 'Return for a new token:|LAC:' stderr
stderr:
stdout:
./calc.at:1346: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE'
input:
./calc.at:1346: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c
| 1 + 2 * 3 + !- ++
./calc.at:1344: $PREPARSER ./calc input
stderr:
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
input:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1346: $PREPARSER ./calc input
stderr:
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1346: $EGREP -c -v 'Return for a new token:|LAC:' stderr
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
| 1 2
./calc.at:1346: $PREPARSER ./calc input
stderr:
./calc.at:1344: cat stderr
1.3: syntax error
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
| 1 + 2 * 3 + !* ++
./calc.at:1344: $PREPARSER ./calc input
stderr:
memory exhausted
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.3: syntax error
stderr:
memory exhausted
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1344: cat stderr
./calc.at:1346: cat stderr
input:
| (#) + (#) = 2222
./calc.at:1344: $PREPARSER ./calc input
input:
| 1//2
stderr:
./calc.at:1346: $PREPARSER ./calc input
syntax error: invalid character: '#'
syntax error: invalid character: '#'
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.3: syntax error
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error: invalid character: '#'
syntax error: invalid character: '#'
stderr:
1.3: syntax error
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1346: cat stderr
./calc.at:1344: cat stderr
input:
| error
./calc.at:1346: $PREPARSER ./calc input
input:
stderr:
1.1: syntax error
| (1 + #) = 1111
./calc.at:1344: $PREPARSER ./calc input
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
stderr:
syntax error: invalid character: '#'
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
1.1: syntax error
stderr:
syntax error: invalid character: '#'
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1346: cat stderr
./calc.at:1344: cat stderr
input:
| 1 = 2 = 3
./calc.at:1346: $PREPARSER ./calc input
input:
stderr:
| (# + 1) = 1111
1.7: syntax error
./calc.at:1344: $PREPARSER ./calc input
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
stdout:
stderr:
syntax error: invalid character: '#'
./calc.at:1347: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE'
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.7: syntax error
./calc.at:1347: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c
stderr:
syntax error: invalid character: '#'
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1347: $PREPARSER ./calc input
stderr:
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1346: cat stderr
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
input:
./calc.at:1347: $EGREP -c -v 'Return for a new token:|LAC:' stderr
|
| +1
./calc.at:1346: $PREPARSER ./calc input
stderr:
2.1: syntax error
input:
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1344: cat stderr
| 1 2
./calc.at:1347: $PREPARSER ./calc input
stderr:
stderr:
1.3: syntax error
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
2.1: syntax error
input:
stderr:
| (1 + # + 1) = 1111
./calc.at:1344: $PREPARSER ./calc input
1.3: syntax error
stderr:
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
syntax error: invalid character: '#'
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error: invalid character: '#'
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1346: cat stderr
./calc.at:1346: $PREPARSER ./calc /dev/null
stderr:
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
1.1: syntax error
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1347: cat stderr
stderr:
./calc.at:1344: cat stderr
1.1: syntax error
input:
| 1//2
./calc.at:1347: $PREPARSER ./calc input
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
stderr:
| (1 + 1) / (1 - 1)
./calc.at:1344: $PREPARSER ./calc input
1.3: syntax error
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
error: null divisor
./calc.at:1344: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.3: syntax error
stderr:
error: null divisor
./calc.at:1346: cat stderr
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
./calc.at:1344: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1346: $PREPARSER ./calc input
./calc.at:1344: cat stderr
stderr:
1.2: syntax error
1.18: syntax error
1.23: syntax error
1.41: syntax error
1.1-46: error: 4444 != 1
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
478. calc.at:1344: ok
./calc.at:1347: cat stderr
stderr:
1.2: syntax error
1.18: syntax error
1.23: syntax error
1.41: syntax error
1.1-46: error: 4444 != 1
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
| error
./calc.at:1347: $PREPARSER ./calc input
./calc.at:1346: cat stderr
stderr:
1.1: syntax error
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
input:
1.1: syntax error
| (!!) + (1 2) = 1
./calc.at:1346: $PREPARSER ./calc input
stderr:
1.11: syntax error
1.1-16: error: 2222 != 1
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.11: syntax error
1.1-16: error: 2222 != 1
./calc.at:1347: cat stderr
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
| 1 = 2 = 3
./calc.at:1347: $PREPARSER ./calc input
481. calc.at:1348: testing Calculator parse.error=detailed %locations ...
./calc.at:1348: mv calc.y.tmp calc.y
stderr:
./calc.at:1346: cat stderr
1.7: syntax error
./calc.at:1348: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
stderr:
1.7: syntax error
| (- *) + (1 2) = 1
./calc.at:1346: $PREPARSER ./calc input
stderr:
1.4: syntax error
1.12: syntax error
1.1-17: error: 2222 != 1
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
1.4: syntax error
1.12: syntax error
1.1-17: error: 2222 != 1
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1347: cat stderr
./calc.at:1346: cat stderr
input:
input:
|
| +1
./calc.at:1347: $PREPARSER ./calc input
| (* *) + (*) + (*)
./calc.at:1346: $PREPARSER ./calc input
stderr:
stderr:
2.1: syntax error
1.2: syntax error
1.10: syntax error
1.16: syntax error
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error
1.10: syntax error
1.16: syntax error
stderr:
2.1: syntax error
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1346: cat stderr
./calc.at:1347: cat stderr
input:
./calc.at:1347: $PREPARSER ./calc /dev/null
| 1 + 2 * 3 + !+ ++
./calc.at:1346: $PREPARSER ./calc input
stderr:
1.1: syntax error
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.1: syntax error
stderr:
./calc.at:1346: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
| 1 + 2 * 3 + !- ++
./calc.at:1346: $PREPARSER ./calc input
stderr:
./calc.at:1348: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1347: cat stderr
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1347: $PREPARSER ./calc input
stderr:
1.2: syntax error
1.18: syntax error
1.23: syntax error
1.41: syntax error
1.1-46: error: 4444 != 1
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1346: cat stderr
stderr:
1.2: syntax error
1.18: syntax error
1.23: syntax error
1.41: syntax error
1.1-46: error: 4444 != 1
input:
| 1 + 2 * 3 + !* ++
./calc.at:1346: $PREPARSER ./calc input
stderr:
1.14: memory exhausted
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
1.14: memory exhausted
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1347: cat stderr
input:
| (!!) + (1 2) = 1
./calc.at:1347: $PREPARSER ./calc input
stderr:
./calc.at:1346: cat stderr
1.11: syntax error
1.1-16: error: 2222 != 1
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
stderr:
1.11: syntax error
1.1-16: error: 2222 != 1
| (#) + (#) = 2222
./calc.at:1346: $PREPARSER ./calc input
stderr:
1.2: syntax error: invalid character: '#'
1.8: syntax error: invalid character: '#'
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
1.2: syntax error: invalid character: '#'
1.8: syntax error: invalid character: '#'
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1347: cat stderr
input:
| (- *) + (1 2) = 1
./calc.at:1346: cat stderr
./calc.at:1347: $PREPARSER ./calc input
stderr:
1.4: syntax error
1.12: syntax error
1.1-17: error: 2222 != 1
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
| (1 + #) = 1111
./calc.at:1346: $PREPARSER ./calc input
stderr:
1.4: syntax error
1.12: syntax error
1.1-17: error: 2222 != 1
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1347: cat stderr
./calc.at:1346: cat stderr
input:
| (* *) + (*) + (*)
./calc.at:1347: $PREPARSER ./calc input
input:
stderr:
| (# + 1) = 1111
1.2: syntax error
1.10: syntax error
1.16: syntax error
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1346: $PREPARSER ./calc input
stderr:
stderr:
1.2: syntax error: invalid character: '#'
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
1.2: syntax error
1.10: syntax error
1.16: syntax error
stderr:
1.2: syntax error: invalid character: '#'
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1347: cat stderr
./calc.at:1346: cat stderr
input:
| 1 + 2 * 3 + !+ ++
./calc.at:1347: $PREPARSER ./calc input
input:
| (1 + # + 1) = 1111
stderr:
./calc.at:1346: $PREPARSER ./calc input
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.6: syntax error: invalid character: '#'
stderr:
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1347: $EGREP -c -v 'Return for a new token:|LAC:' stderr
stderr:
1.6: syntax error: invalid character: '#'
input:
| 1 + 2 * 3 + !- ++
./calc.at:1347: $PREPARSER ./calc input
stderr:
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1346: cat stderr
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
| (1 + 1) / (1 - 1)
./calc.at:1346: $PREPARSER ./calc input
stderr:
./calc.at:1347: cat stderr
1.11-17: error: null divisor
./calc.at:1346: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
input:
1.11-17: error: null divisor
| 1 + 2 * 3 + !* ++
./calc.at:1347: $PREPARSER ./calc input
stderr:
1.14: memory exhausted
./calc.at:1346: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.14: memory exhausted
./calc.at:1346: cat stderr
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
479. calc.at:1346: ok
./calc.at:1347: cat stderr
input:
| (#) + (#) = 2222
./calc.at:1347: $PREPARSER ./calc input
stderr:
1.2: syntax error: invalid character: '#'
1.8: syntax error: invalid character: '#'
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error: invalid character: '#'
1.8: syntax error: invalid character: '#'
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1347: cat stderr
input:
| (1 + #) = 1111
./calc.at:1347: $PREPARSER ./calc input
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
482. calc.at:1350: testing Calculator parse.error=detailed %locations %header api.prefix={calc} %verbose %yacc ...
./calc.at:1350: if "$POSIXLY_CORRECT_IS_EXPORTED"; then
  sed -e '/\/\* !POSIX \*\//d' calc.y.tmp >calc.y
else
  mv calc.y.tmp calc.y
fi
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1350: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1347: cat stderr
input:
| (# + 1) = 1111
./calc.at:1347: $PREPARSER ./calc input
stderr:
1.2: syntax error: invalid character: '#'
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error: invalid character: '#'
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1347: cat stderr
input:
| (1 + # + 1) = 1111
./calc.at:1347: $PREPARSER ./calc input
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1347: cat stderr
input:
| (1 + 1) / (1 - 1)
./calc.at:1347: $PREPARSER ./calc input
stderr:
1.11-17: error: null divisor
./calc.at:1347: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.11-17: error: null divisor
./calc.at:1347: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1350: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS
./calc.at:1347: cat stderr
480. calc.at:1347: ok
483. calc.at:1351: testing Calculator parse.error=detailed %locations %header %name-prefix "calc" api.token.prefix={TOK_} %verbose %yacc ...
./calc.at:1351: if "$POSIXLY_CORRECT_IS_EXPORTED"; then
  sed -e '/\/\* !POSIX \*\//d' calc.y.tmp >calc.y
else
  mv calc.y.tmp calc.y
fi
./calc.at:1351: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y
stderr:
stdout:
./calc.at:1348: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE'
./calc.at:1348: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c
input:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1348: $PREPARSER ./calc input
stderr:
./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1348: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
| 1 2
./calc.at:1348: $PREPARSER ./calc input
stderr:
1.3: syntax error, unexpected number
./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.3: syntax error, unexpected number
./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1348: cat stderr
input:
| 1//2
./calc.at:1348: $PREPARSER ./calc input
stderr:
1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!'
./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1351: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS
1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!'
./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1348: cat stderr
input:
| error
./calc.at:1348: $PREPARSER ./calc input
stderr:
1.1: syntax error, unexpected invalid token
./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.1: syntax error, unexpected invalid token
./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1348: cat stderr
input:
| 1 = 2 = 3
./calc.at:1348: $PREPARSER ./calc input
stderr:
1.7: syntax error, unexpected '='
./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.7: syntax error, unexpected '='
./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1348: cat stderr
input:
|
| +1
./calc.at:1348: $PREPARSER ./calc input
stderr:
2.1: syntax error, unexpected '+'
./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
2.1: syntax error, unexpected '+'
./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1348: cat stderr
./calc.at:1348: $PREPARSER ./calc /dev/null
stderr:
1.1: syntax error, unexpected end of file
./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.1: syntax error, unexpected end of file
./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1348: cat stderr
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1348: $PREPARSER ./calc input
stderr:
1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.1-46: error: 4444 != 1
./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.1-46: error: 4444 != 1
./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1348: cat stderr
input:
| (!!) + (1 2) = 1
./calc.at:1348: $PREPARSER ./calc input
stderr:
1.11: syntax error, unexpected number
1.1-16: error: 2222 != 1
./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.11: syntax error, unexpected number
1.1-16: error: 2222 != 1
./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1348: cat stderr
input:
| (- *) + (1 2) = 1
./calc.at:1348: $PREPARSER ./calc input
stderr:
1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.12: syntax error, unexpected number
1.1-17: error: 2222 != 1
./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.12: syntax error, unexpected number
1.1-17: error: 2222 != 1
./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1348: cat stderr
input:
| (* *) + (*) + (*)
./calc.at:1348: $PREPARSER ./calc input
stderr:
1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1348: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1348: $PREPARSER ./calc input stderr: ./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1348: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1348: $PREPARSER ./calc input stderr: ./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1348: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1348: $PREPARSER ./calc input stderr: 1.14: memory exhausted ./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.14: memory exhausted ./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1348: cat stderr input: | (#) + (#) = 2222 ./calc.at:1348: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1348: cat stderr input: | (1 + #) = 1111 ./calc.at:1348: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1348: cat stderr input: | (# + 1) = 1111 ./calc.at:1348: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1348: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1348: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1348: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1348: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1348: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1348: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: stdout: ./calc.at:1348: cat stderr ./calc.at:1350: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1350: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c calc.h 481. 
calc.at:1348: ok input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1350: $PREPARSER ./calc input stderr: ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1350: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1350: $PREPARSER ./calc input stderr: 1.3: syntax error, unexpected number ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error, unexpected number ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 484. calc.at:1353: testing Calculator %debug ... ./calc.at:1353: mv calc.y.tmp calc.y ./calc.at:1353: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1350: cat stderr input: | 1//2 ./calc.at:1350: $PREPARSER ./calc input stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: cat stderr input: | error ./calc.at:1350: $PREPARSER ./calc input stderr: 1.1: syntax error, unexpected invalid token ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error, unexpected invalid token ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: cat stderr input: | 1 = 2 = 3 ./calc.at:1350: $PREPARSER ./calc input stderr: 1.7: syntax error, unexpected '=' ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error, unexpected '=' ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: cat stderr ./calc.at:1353: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS input: | | +1 ./calc.at:1350: $PREPARSER ./calc input stderr: 2.1: syntax error, unexpected '+' ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error, unexpected '+' ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: cat stderr ./calc.at:1350: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error, unexpected end of file ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error, unexpected end of file ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1350: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: cat stderr stderr: stdout: ./calc.at:1351: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' input: | (!!) + (1 2) = 1 ./calc.at:1350: $PREPARSER ./calc input stderr: ./calc.at:1351: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c calc.h 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1351: $PREPARSER ./calc input ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1350: cat stderr stderr: ./calc.at:1351: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | (- *) + (1 2) = 1 ./calc.at:1350: $PREPARSER ./calc input input: stderr: | 1 2 ./calc.at:1351: $PREPARSER ./calc input 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error, unexpected number ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 stderr: 1.3: syntax error, unexpected number ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: cat stderr ./calc.at:1351: cat stderr input: | (* *) + (*) + (*) ./calc.at:1350: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1//2 ./calc.at:1351: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
./calc.at:1350: cat stderr ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 + 2 * 3 + !+ ++ ./calc.at:1351: cat stderr ./calc.at:1350: $PREPARSER ./calc input stderr: ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | error ./calc.at:1351: $PREPARSER ./calc input stderr: ./calc.at:1350: $EGREP -c -v 'Return for a new token:|LAC:' stderr stderr: 1.1: syntax error, unexpected invalid token ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | 1 + 2 * 3 + !- ++ 1.1: syntax error, unexpected invalid token ./calc.at:1350: $PREPARSER ./calc input stderr: ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1351: cat stderr ./calc.at:1350: cat stderr input: | 1 = 2 = 3 input: ./calc.at:1351: $PREPARSER ./calc input | 1 + 2 * 3 + !* ++ ./calc.at:1350: $PREPARSER ./calc input stderr: stderr: 1.7: syntax error, unexpected '=' 1.14: memory exhausted ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 1.14: memory exhausted 1.7: syntax error, unexpected '=' ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: cat stderr ./calc.at:1351: cat stderr input: input: | | +1 | (#) + (#) = 2222 ./calc.at:1351: $PREPARSER ./calc input ./calc.at:1350: $PREPARSER ./calc input stderr: stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 2.1: syntax error, unexpected '+' ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' stderr: 2.1: syntax error, unexpected '+' ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1351: cat stderr ./calc.at:1350: cat stderr ./calc.at:1351: $PREPARSER ./calc /dev/null input: stderr: 1.1: syntax error, unexpected end of file | (1 + #) = 1111 ./calc.at:1350: $PREPARSER ./calc input ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error, unexpected end of file stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1351: cat stderr ./calc.at:1350: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1351: $PREPARSER ./calc input input: stderr: | (# + 1) = 1111 ./calc.at:1350: $PREPARSER ./calc input 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
1.1-46: error: 4444 != 1 ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: cat stderr ./calc.at:1351: cat stderr input: input: | (!!) + (1 2) = 1 ./calc.at:1351: $PREPARSER ./calc input | (1 + # + 1) = 1111 ./calc.at:1350: $PREPARSER ./calc input stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 stderr: ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 1.6: syntax error: invalid character: '#' ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 1.6: syntax error: invalid character: '#' ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: cat stderr ./calc.at:1351: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1350: $PREPARSER ./calc input input: stderr: | (- *) + (1 2) = 1 ./calc.at:1351: $PREPARSER ./calc input 1.11-17: error: null divisor ./calc.at:1350: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 stderr: 1.11-17: error: null divisor ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1350: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1351: cat stderr ./calc.at:1350: cat stderr input: | (* *) + (*) + (*) ./calc.at:1351: $PREPARSER ./calc input 482. calc.at:1350: ok stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1351: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1351: $PREPARSER ./calc input stderr: ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1351: $EGREP -c -v 'Return for a new token:|LAC:' stderr 485. calc.at:1354: testing Calculator parse.error=detailed %debug %locations %header %name-prefix "calc" %verbose %yacc ... ./calc.at:1354: if "$POSIXLY_CORRECT_IS_EXPORTED"; then sed -e '/\/\* !POSIX \*\//d' calc.y.tmp >calc.y else mv calc.y.tmp calc.y fi input: ./calc.at:1354: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y | 1 + 2 * 3 + !- ++ ./calc.at:1351: $PREPARSER ./calc input stderr: ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1351: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1351: $PREPARSER ./calc input stderr: 1.14: memory exhausted ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.14: memory exhausted ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1351: cat stderr input: | (#) + (#) = 2222 ./calc.at:1351: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1351: cat stderr input: | (1 + #) = 1111 ./calc.at:1351: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1354: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS 1.6: syntax error: invalid character: '#' ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1351: cat stderr input: | (# + 1) = 1111 ./calc.at:1351: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1351: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1351: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1351: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1351: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1351: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 1.11-17: error: null divisor stdout: ./calc.at:1353: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1351: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1353: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c ./calc.at:1351: cat stderr 483. calc.at:1351: ok input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1353: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 
Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 7) Shifting token "number" (1.1: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 7) -> $$ = nterm exp (1.1: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 7) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 7) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 7) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp 
(1.1: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: -3) -> $$ = nterm exp (1.1: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: -6) -> $$ = nterm exp (1.1: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 5) Shifting token "number" (1.1: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 5) -> $$ = nterm exp (1.1: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 5) -> $$ = nterm exp (1.1: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -5) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -5) -> $$ = nterm exp (1.1: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -5) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting 
token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 1) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 
(line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: -1) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering 
state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: -1) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) 
Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 
Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: -1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a 
token Next token is token "number" (1.1: 4) Shifting token "number" (1.1: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 4) -> $$ = nterm exp (1.1: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 4) -> $$ = nterm exp (1.1: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -4) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -4) -> $$ = nterm exp (1.1: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -4) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 
Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 2) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: -1) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = 
token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 8) -> $$ = nterm exp (1.1: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 256) Shifting token "number" (1.1: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 256) -> $$ = nterm exp (1.1: 256) Entering state 28 Stack 
now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 256) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 256) -> $$ = nterm exp (1.1: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 256) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 4) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is 
token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 4) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 64) Shifting token "number" (1.1: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 64) -> $$ = nterm exp (1.1: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 64) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 64) -> $$ = nterm exp (1.1: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 64) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 7) Shifting token "number" (1.1: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 7) -> $$ = 
nterm exp (1.1: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 7) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 7) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 7) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (1.1: ) Reducing 
stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: -3) -> $$ = nterm exp (1.1: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: -6) -> $$ = nterm exp (1.1: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 5) Shifting token "number" (1.1: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 5) -> $$ = nterm exp (1.1: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 5) -> $$ = nterm exp (1.1: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -5) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -5) -> $$ = nterm exp (1.1: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -5) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 
Stack now 0 6 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 1) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: -1) 
Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: -1) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: -1) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' 
(1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp 
(1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = 
nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: -1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (1.1: 4) Shifting token "number" (1.1: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 4) -> $$ = nterm exp (1.1: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (1.1: ) 
Reducing stack by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 4) -> $$ = nterm exp (1.1: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: -4) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -4) -> $$ = nterm exp (1.1: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: -4) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 2) $2 = token '-' (1.1: ) 
$3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: -1) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (1.1: 2) Shifting token 
"number" (1.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 8) -> $$ = nterm exp (1.1: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 256) Shifting token "number" (1.1: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 256) -> $$ = nterm exp (1.1: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 256) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 256) -> $$ = nterm exp (1.1: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' 
(1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 256) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 4) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (1.1: ) Reducing stack by 
rule 12 (line 117): $1 = nterm exp (1.1: 4) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (1.1: 64) Shifting token "number" (1.1: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 64) -> $$ = nterm exp (1.1: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 64) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 64) -> $$ = nterm exp (1.1: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 64) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: )
./calc.at:1353: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
| 1 2
./calc.at:1353: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token "number" (1.1: 2) Stack now 0
./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
486. calc.at:1355: testing Calculator parse.error=detailed %debug %locations %header api.prefix={calc} %verbose %yacc ...
./calc.at:1355: if "$POSIXLY_CORRECT_IS_EXPORTED"; then
  sed -e '/\/\* !POSIX \*\//d' calc.y.tmp >calc.y
else
  mv calc.y.tmp calc.y
fi
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token "number" (1.1: 2) Stack now 0
./calc.at:1355: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y
./calc.at:1353: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1353: cat stderr
input:
| 1//2
./calc.at:1353: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.1: ) Shifting token '/' (1.1: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.1: ) syntax error Error: popping token '/' (1.1: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.1: ) Stack now 0
./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.1: ) Shifting token '/' (1.1: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.1: ) syntax error Error: popping token '/' (1.1: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.1: ) Stack now 0
./calc.at:1353: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1353: cat stderr
input:
| error
./calc.at:1353: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "invalid token" (1.1: ) syntax error Cleanup: discarding lookahead token "invalid token" (1.1: ) Stack now 0
./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "invalid token" (1.1: ) syntax error Cleanup: discarding lookahead token "invalid token" (1.1: ) Stack now 0
./calc.at:1353: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1353: cat stderr
input:
| 1 = 2 = 3
./calc.at:1353: $PREPARSER ./calc input
stderr:
./calc.at:1355: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.1: ) syntax error Error: popping nterm exp (1.1: 2) Stack now 0 8 19 Error: popping token '=' (1.1: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.1: ) Stack now 0
./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.1: ) syntax error Error: popping nterm exp (1.1: 2) Stack now 0 8 19 Error: popping token '=' (1.1: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.1: ) Stack now 0
./calc.at:1353: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1353: cat stderr
input:
|
| +1
./calc.at:1353: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (1.1: ) syntax error Error: popping nterm input (1.1: ) Stack now 0 Cleanup: discarding lookahead token '+' (1.1: ) Stack now 0
./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (1.1: ) syntax error Error: popping nterm input (1.1: ) Stack now 0 Cleanup: discarding lookahead token '+' (1.1: ) Stack now 0
./calc.at:1353: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1353: cat stderr
./calc.at:1353: $PREPARSER ./calc /dev/null
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. syntax error Cleanup: discarding lookahead token "end of input" (1.1: ) Stack now 0
./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. syntax error Cleanup: discarding lookahead token "end of input" (1.1: ) Stack now 0
./calc.at:1353: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1353: cat stderr
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1353: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1)
Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 2) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 3) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.1: ) syntax error Error: popping token '+' (1.1: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.1: 3) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' 
(1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 2222) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 4 
12 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 1) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.1: ) syntax error Error: popping token '*' (1.1: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.1: 2) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 3333) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 4444) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 4444 != 1 -> $$ = nterm exp (1.1: 4444) Entering state 8 Stack now 0 
8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 4444) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> 
$$ = nterm exp (1.1: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 2) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 3) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.1: ) syntax error Error: popping token '+' (1.1: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.1: 3) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.1: ) syntax error 
Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 2222) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) 
Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 1) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.1: ) syntax error Error: popping token '*' (1.1: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.1: 2) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 3333) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 4444) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 4444 != 1 -> $$ = nterm exp (1.1: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 
Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 4444) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1353: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1353: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 121): $1 = token '!' (1.1: ) $2 = token '!' 
(1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.1: 2) Error: discarding token "number" (1.1: 2) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 
(line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 2222 != 1 -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 121): $1 = token '!' (1.1: ) $2 = token '!' 
(1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.1: 2) Error: discarding token "number" (1.1: 2) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 
(line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 2222 != 1 -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1353: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1353: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 120): $1 = token '-' (1.1: ) $2 = token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.1: 2) Error: discarding token "number" (1.1: 2) Error: 
popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 2222 != 1 -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 120): $1 = token '-' (1.1: ) $2 = token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.1: 2) Error: discarding 
token "number" (1.1: 2) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 2222 != 1 -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1353: cat stderr input: | (* *) + (*) + (*) ./calc.at:1353: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 
8 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 2222) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 3333) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) 
Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 2222) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 3333) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1353: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1353: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) 
Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 122): $1 = token '!' (1.1: ) $2 = token '+' (1.1: ) Stack now 0 8 21 Cleanup: popping token '+' (1.1: ) Cleanup: popping nterm exp (1.1: 7) ./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 
6) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 122): $1 = token '!' (1.1: ) $2 = token '+' (1.1: ) Stack now 0 8 21 Cleanup: popping token '+' (1.1: ) Cleanup: popping nterm exp (1.1: 7) ./calc.at:1353: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1353: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.1: ) Reducing stack by rule 
7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 123): $1 = token '!' (1.1: ) $2 = token '-' (1.1: ) Stack now 0 8 21 Cleanup: popping token '+' (1.1: ) Cleanup: popping nterm exp (1.1: 7) ./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' 
(1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 123): $1 = token '!' (1.1: ) $2 = token '-' (1.1: ) Stack now 0 8 21 Cleanup: popping token '+' (1.1: ) Cleanup: popping nterm exp (1.1: 7) ./calc.at:1353: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1353: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1353: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 
81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 124): $1 = token '!' (1.1: ) $2 = token '*' (1.1: ) memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.1: ) Cleanup: popping nterm exp (1.1: 7) ./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Stack now 
0 8 21 30 22 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.1: ) Reducing stack by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 124): $1 = token '!' (1.1: ) $2 = token '*' (1.1: ) memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.1: ) Cleanup: popping nterm exp (1.1: 7) ./calc.at:1353: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1353: cat stderr input: | (#) + (#) = 2222 ./calc.at:1353: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token syntax error: invalid character: '#' Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token syntax error: invalid character: '#' Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 
1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 2222) Shifting token "number" (1.1: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2222) -> $$ = nterm exp (1.1: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 2222) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token syntax error: invalid character: '#' Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 21 4 Reading a token syntax error: invalid character: '#' Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 8 21 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' 
(1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 2222) Shifting token "number" (1.1: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 2222) -> $$ = nterm exp (1.1: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 2222) -> $$ = nterm exp (1.1: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1353: cat stderr input: | (1 + #) = 1111 ./calc.at:1353: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 4 12 21 Reading a token syntax error: invalid character: '#' Error: popping token '+' (1.1: ) Stack now 0 4 12 Error: popping nterm exp (1.1: 1) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1111) Shifting token "number" (1.1: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1111) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp 
(1.1: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1111) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 4 12 21 Reading a token syntax error: invalid character: '#' Error: popping token '+' (1.1: ) Stack now 0 4 12 Error: popping nterm exp (1.1: 1) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering 
state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1111) Shifting token "number" (1.1: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1111) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1111) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1353: cat stderr input: | (# + 1) = 1111 ./calc.at:1353: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token syntax error: invalid character: '#' Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.1: ) Error: discarding token '+' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token "number" (1.1: 1) Error: discarding token "number" (1.1: 1) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1111) Shifting token "number" (1.1: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1111) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Next 
token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1111) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token syntax error: invalid character: '#' Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.1: ) Error: discarding token '+' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token "number" (1.1: 1) Error: discarding token "number" (1.1: 1) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1111) Shifting token 
"number" (1.1: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1111) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1111) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1353: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1353: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 4 12 21 Reading a token syntax error: invalid character: '#' Error: popping token '+' (1.1: ) Stack now 0 4 12 Error: popping nterm exp (1.1: 1) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.1: ) Error: discarding token '+' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token "number" (1.1: 1) Error: discarding token "number" (1.1: 1) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is 
token "number" (1.1: 1111) Shifting token "number" (1.1: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1111) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1111) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 4 12 21 Reading a token syntax error: invalid character: '#' Error: popping token '+' (1.1: ) Stack now 0 4 12 Error: popping nterm exp (1.1: 1) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.1: ) Error: discarding token "invalid token" (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token 
error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.1: ) Error: discarding token '+' (1.1: ) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token "number" (1.1: 1) Error: discarding token "number" (1.1: 1) Error: popping token error (1.1: ) Stack now 0 4 Shifting token error (1.1: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.1: 1111) Shifting token "number" (1.1: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 6 (line 82): $1 = nterm exp (1.1: 1111) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 1111) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1353: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1353: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 2) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.1: ) Shifting token '/' (1.1: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.1: ) Shifting 
token '(' (1.1: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 0) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 10 (line 101): $1 = nterm exp (1.1: 2) $2 = token '/' (1.1: ) $3 = nterm exp (1.1: 0) error: null divisor -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 2) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.1: ) Shifting token '/' (1.1: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a 
token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.1: ) Reducing stack by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 0) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.1: ) Reducing stack by rule 10 (line 101): $1 = nterm exp (1.1: 2) $2 = token '/' (1.1: ) $3 = nterm exp (1.1: 0) error: null divisor -> $$ = nterm exp (1.1: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 77): $1 = nterm exp (1.1: 2) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1353: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1353: cat stderr 484. calc.at:1353: ok 487. calc.at:1357: testing Calculator api.pure=full parse.error=detailed %debug %locations %header %name-prefix "calc" %verbose %yacc ... ./calc.at:1357: if "$POSIXLY_CORRECT_IS_EXPORTED"; then sed -e '/\/\* !POSIX \*\//d' calc.y.tmp >calc.y else mv calc.y.tmp calc.y fi ./calc.at:1357: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1357: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS stderr: stdout: ./calc.at:1354: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1354: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c calc.h input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1354: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token 
number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is 
token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 136): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp 
(2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 101): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm 
exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 101): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack 
now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 136): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-5.0: ) $2 = nterm 
line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 
(line 101): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token 
'-' (9.7: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-9.0: ) $2 
= nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 101): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is 
token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.3: 2) -> $$ = nterm exp 
(12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token 
'(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 101): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 
6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 
30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 
21 1 Reducing stack by rule 5 (line 101): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 136): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' 
(2.16-3.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 101): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' 
(4.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 101): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 
by rule 11 (line 136): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 
Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 
Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (9.1: 1) $2 = token 
'-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack 
now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 101): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 
Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token 
'^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token 
number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 101): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (13.11-12: 64) -> 
$$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1354: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1354: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax 
error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1354: cat stderr input: | 1//2 ./calc.at:1354: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1354: cat stderr input: | error ./calc.at:1354: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1354: cat stderr input: | 1 = 2 = 3 ./calc.at:1354: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) 
Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1354: cat stderr input: | | +1 ./calc.at:1354: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ 
/\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1354: cat stderr ./calc.at:1354: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of file Cleanup: discarding lookahead token end of file (1.1: ) Stack now 0 ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of file Cleanup: discarding lookahead token end of file (1.1: ) Stack now 0 ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1354: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1354: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 
12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 101): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 
1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 
12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 101): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 
1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: stdout: ./calc.at:1355: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1354: cat stderr ./calc.at:1355: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c calc.h input: | (!!) + (1 2) = 1 ./calc.at:1354: $PREPARSER ./calc input input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 stderr: ./calc.at:1355: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' 
(1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 141): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering 
state 19 Stack now 0 8 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 141): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: ./calc.at:1354: cat stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' 
(1.14-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 136): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (2.5: 2) $2 = 
token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> 
$$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 101): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = 
nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 101): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 136): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 
1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 
10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) 
Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (9.14: 4) Shifting token 
number (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 101): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) 
Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token 
Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (12.9-11: 
256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 101): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 
(line 138): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13: 7) -> $$ = 
nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 136): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 
Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 
18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 101): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' 
(4.10-5.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 101): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 136): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 
137): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 
Reducing stack by rule 5 (line 101): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 
Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering 
state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 136): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 101): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 
4 12 20 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-10.0: ) $2 = nterm line 
(10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) 
Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 101): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) 
Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 137): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (14.1: ) Cleanup: popping nterm input (1.1-14.0: )
./calc.at:1355: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
  | (- *) + (1 2) = 1
./calc.at:1354: $PREPARSER ./calc input
input:
  | 1 2
./calc.at:1355: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 140): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error,
unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0
./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 140): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 118): $1 
= nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0
./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1354: cat stderr
./calc.at:1355: cat stderr
input:
input:
  | (* *) + (*) + (*)
./calc.at:1354: $PREPARSER ./calc input
  | 1//2
./calc.at:1355: $PREPARSER ./calc input
stderr:
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0
./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1355: cat stderr
./calc.at:1354: cat stderr
input:
  | error
./calc.at:1355: $PREPARSER ./calc input
input:
  | 1 + 2 * 3 + !+ ++
./calc.at:1354: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0
./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp 
(1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 142): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm 
exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 142): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1354: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 + 2 * 3 + !- ++ ./calc.at:1355: cat stderr ./calc.at:1354: $PREPARSER ./calc input stderr: input: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 143): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | 1 = 2 = 3 ./calc.at:1355: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 
21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 143): $1 = token '!' 
(1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1354: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1354: $PREPARSER ./calc input ./calc.at:1355: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 144): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | | +1 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 
21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 144): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1355: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1354: cat stderr ./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (#) + (#) = 2222 ./calc.at:1354: $PREPARSER ./calc input ./calc.at:1355: cat stderr stderr: ./calc.at:1355: $PREPARSER ./calc /dev/null Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token 
Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line 
(1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of file Cleanup: discarding lookahead token end of file (1.1: ) Stack now 0 ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token 
is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 
1.1: syntax error, unexpected end of file Cleanup: discarding lookahead token end of file (1.1: ) Stack now 0 ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1354: cat stderr ./calc.at:1355: cat stderr input: | (1 + #) = 1111 ./calc.at:1354: $PREPARSER ./calc input input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1355: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing 
stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token 
'=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) 
Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 101): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 
1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 
12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 101): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 
1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1354: cat stderr input: ./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | (# + 1) = 1111 ./calc.at:1354: $PREPARSER ./calc input stderr: ./calc.at:1355: cat stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 
Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (!!) 
+ (1 2) = 1 ./calc.at:1355: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = 
nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 141): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number 
(1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 141): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1354: cat stderr ./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (1 + # + 1) = 1111 ./calc.at:1354: $PREPARSER ./calc input ./calc.at:1355: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack now 0 8 19 Reading a 
token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (- *) + (1 2) = 1 ./calc.at:1355: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token 
is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 140): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.12: 2) Error: 
discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 140): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing 
stack by rule 5 (line 101): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = 
nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1354: cat stderr ./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (1 + 1) / (1 - 1) ./calc.at:1354: $PREPARSER ./calc input ./calc.at:1355: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next 
token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 121): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 
Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (* *) + (*) + (*) ./calc.at:1355: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 
(line 101): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 121): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1354: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1354: cat stderr ./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 485. 
calc.at:1354: ok ./calc.at:1355: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1355: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 142): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 142): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1355: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1355: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 
21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 143): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting 
token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 143): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1355: cat stderr 488. calc.at:1358: testing Calculator api.push-pull=both api.pure=full parse.error=detailed %debug %locations %header api.prefix={calc} %verbose %yacc ... ./calc.at:1358: if "$POSIXLY_CORRECT_IS_EXPORTED"; then sed -e '/\/\* !POSIX \*\//d' calc.y.tmp >calc.y else mv calc.y.tmp calc.y fi input: | 1 + 2 * 3 + !* ++ ./calc.at:1358: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1355: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) 
Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 144): $1 = token '!' 
(1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 120): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 144): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1355: cat stderr input: | (#) + (#) = 2222 ./calc.at:1355: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.8: ) 
Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.1-3: 1111) $2 = 
token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1355: cat stderr input: | (1 + #) = 1111 ./calc.at:1355: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp 
(1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a 
token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1355: cat stderr ./calc.at:1358: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS input: | (# + 1) = 1111 ./calc.at:1355: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 
= token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token 
is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1355: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1355: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack now 0 8 19 Reading a 
token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid 
token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 139): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1355: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1355: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1355: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.11: ) Shifting token 
'(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 121): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stdout: stderr: ./calc.at:1357: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 118): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 
101): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 119): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 138): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 121): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1357: "$PERL" -ne '
  chomp;
  print "$ARGV:$.: {$_}\n"
    if (# No starting/ending empty lines.
        (eof || $.
== 1) && /^\s*$/
        # No trailing space.
        || /\s$/
        # No tabs.
        || /\t/
       )' calc.c calc.h
./calc.at:1355: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
input:
  | 1 + 2 * 3 = 7
  | 1 + 2 * -3 = -5
  |
  | -1^2 = -1
  | (-1)^2 = 1
  |
  | ---1 = -1
  |
  | 1 - 2 - 3 = -4
  | 1 - (2 - 3) = 2
  |
  | 2^2^3 = 256
  | (2^2)^3 = 64
./calc.at:1357: $PREPARSER ./calc input
./calc.at:1355: cat stderr
486. calc.at:1355: ok
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = 
nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 
8 21 30 22 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 
92): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 101): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 
Reducing stack by rule 5 (line 101): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 101): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 124): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) 
Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) 
Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' 
(7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token 
is token '=' (9.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 
20 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 101): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering 
state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.3: 2) $2 = token 
'^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 101): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 
5 (line 101): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp 
(13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (14.1: ) Cleanup: popping nterm input (1.1-14.0: )
./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 
8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 101): $1 = token 
number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> 
$$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 101): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.9: 1) -> $$ = 
nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 101): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 124): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token 
number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) 
Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 
Reducing stack by rule 4 (line 97): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1-5: 
-1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token number (10.6: 2) Shifting token number 
(10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 101): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' 
(10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 
33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 101): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering 
state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering 
state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1357: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1357: $PREPARSER ./calc input 489. calc.at:1360: testing Calculator api.pure parse.error=detailed %debug %locations %header api.prefix={calc} %verbose %yacc %parse-param {semantic_value *result}{int *count}{int *nerrs} ... ./calc.at:1360: if "$POSIXLY_CORRECT_IS_EXPORTED"; then sed -e '/\/\* !POSIX \*\//d' calc.y.tmp >calc.y else mv calc.y.tmp calc.y fi stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1360: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: 
discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr input: | 1//2 ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr input: | error ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr input: | 1 = 2 = 3 ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) 
Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS ./calc.at:1357: cat stderr input: | | +1 ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1357: "$PERL" -pi -e 'use 
strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr ./calc.at:1357: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of file Cleanup: discarding lookahead token end of file (1.1: ) Stack now 0 ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of file Cleanup: discarding lookahead token end of file (1.1: ) Stack now 0 ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 
12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 101): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 
1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 
12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 101): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 
1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 129): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 129): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 128): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 112): $1 
= nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 128): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 112): $1 
= nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr input: | (* *) + (*) + (*) ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 130): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is 
token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 130): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1357: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) 
Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 131): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 
8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 131): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 
(line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 132): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token 
Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 132): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr input: | (#) + (#) = 2222 ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm 
exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-3: 1111) $2 = 
token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr input: | (1 + #) = 1111 ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp 
(1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a 
token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr input: | (# + 1) = 1111 ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) 
Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 
Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack now 0 8 19 Reading a 
token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid 
token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1357: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.11: ) Shifting token 
'(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 115): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a 
token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 115): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1357: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1357: cat stderr 487. calc.at:1357: ok 490. calc.at:1362: testing Calculator %no-lines api.pure parse.error=detailed %debug %locations %header api.prefix={calc} %verbose %yacc %parse-param {semantic_value *result}{int *count}{int *nerrs} ... ./calc.at:1362: if "$POSIXLY_CORRECT_IS_EXPORTED"; then sed -e '/\/\* !POSIX \*\//d' calc.y.tmp >calc.y else mv calc.y.tmp calc.y fi ./calc.at:1362: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1362: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS stderr: stdout: stderr: ./calc.at:1358: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' stdout: ./calc.at:1360: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1358: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c calc.h ./calc.at:1360: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.c calc.h input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1358: $PREPARSER ./calc input input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1360: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Return for a new token: Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Return for a new token: Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' 
(1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Return for a new token: Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Return for a new token: Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Return for a new token: Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Return for a new token: Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Return for a new token: Reading a token Next token 
is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Return for a new token: Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Return for a new token: Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Return for a new token: Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 
Reducing stack by rule 4 (line 97): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Return for a new token: Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Return for a new token: Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Return for a new token: Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 101): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Return for a new token: Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next 
token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Return for a new token: Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Return for a new token: Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Return for a new token: Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Return for a new token: Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 101): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Return for a new token: Reading a token Next token is token ')' (5.4: ) Reducing 
stack by rule 11 (line 124): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Return for a new token: Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Return for a new token: Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Return for a new token: Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Return for a new token: Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 
92): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Return for a new token: Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Return for a new token: Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Return for a new token: Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Return for a new token: Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token '-' (7.8: ) 
Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Return for a new token: Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Return for a new token: Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Return for a new token: Reading a token Next token is token '-' (9.3: ) Shifting token '-' 
(9.3: ) Entering state 20 Stack now 0 6 8 20 Return for a new token: Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Return for a new token: Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Return for a new token: Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Return for a new token: Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 19 2 Return for a new token: Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Return for a new token: Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by 
rule 6 (line 102): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Return for a new token: Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Return for a new token: Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Return for a new token: Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 101): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Return for a new token: Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Return for a new token: Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Return for a new token: Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.6: 
2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Return for a new token: Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Return for a new token: Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) 
Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Return for a new token: Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Return for a new token: Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Return for a new token: Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Return for a new token: Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Return for a new token: Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 
256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Return for a new token: Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Return for a new token: Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 101): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Return for a new token: Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Return for a new token: Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Return for a new token: Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' 
(13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Return for a new token: Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Return for a new token: Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Return for a new token: Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Return for a new token: Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering 
state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. Shifting token end of file (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Return for a new token: Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Return for a new token: Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 
8 19 Return for a new token: Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Return for a new token: Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Return for a new token: Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Return for a new token: Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Return for a new token: Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Return for a new token: Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering 
state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Return for a new token: Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Return for a new token: Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Return for a new token: Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (2.1-15: -5) $2 = token 
'\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Return for a new token: Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Return for a new token: Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Return for a new token: Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 101): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Return for a new token: Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 
Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Return for a new token: Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Return for a new token: Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Return for a new token: Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Return for a new token: Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 101): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Return for a new token: Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 124): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 
1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Return for a new token: Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Return for a new token: Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Return for a new token: Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Return for a new token: Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = 
nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Return for a new token: Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Return for a new token: Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Return for a new token: Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Return for a new token: Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Return 
for a new token: Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Return for a new token: Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Return for a new token: Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Return for a new token: 
Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Return for a new token: Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Return for a new token: Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Return for a new token: Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 19 2 Return for a new token: Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Return for a new token: Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) 
$3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Return for a new token: Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Return for a new token: Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Return for a new token: Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 101): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Return for a new token: Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Return for a new token: Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Return for a new token: Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm 
exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Return for a new token: Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Return for a new token: Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 
92): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Return for a new token: Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Return for a new token: Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Return for a new token: Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Return for a new token: Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Return for a new token: Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 
(line 101): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Return for a new token: Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Return for a new token: Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 101): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Return for a new token: Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Return for a new token: Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Return for a new token: Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack now 0 
6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Return for a new token: Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Return for a new token: Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Return for a new token: Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Return for a new token: Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Return for a new token: Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at 
end of input. Shifting token end of file (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (14.1: ) Cleanup: popping nterm input (1.1-14.0: )
./calc.at:1358: $EGREP -c -v 'Return for a new token:|LAC:' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13: 7) -> $$ = nterm
exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next 
token is token '=' (2.12: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 
Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 101): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) 
Reducing stack by rule 6 (line 102): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 101): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 124): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 125): $1 = 
nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing 
stack by rule 5 (line 101): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 
0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack 
now 0 6 8 19 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 101): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 4 12 20 
Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-10.0: ) $2 = nterm line 
(10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) 
Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 101): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) 
Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 2 stderr: ./calc.at:1358: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 
(line 101): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) 
Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm 
line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 101): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack 
now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 101): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 124): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' 
(5.8: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 
1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = 
nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) 
Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 101): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) 
Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm 
input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' 
(12.7: ) Shifting token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 101): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 
Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1360: $EGREP -c -v 'Return for a new token:|LAC:' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 2 ./calc.at:1360: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1358: "$PERL" -pi -e 
'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1358: cat stderr ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1//2 ./calc.at:1358: $PREPARSER ./calc input ./calc.at:1360: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Return for a new token: Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | 1//2 ./calc.at:1360: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Return for a new token: Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1358: cat stderr input: | error ./calc.at:1358: $PREPARSER ./calc input ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1360: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 input: | error ./calc.at:1360: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1358: cat stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 input: | 1 = 2 = 3 ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1358: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1360: cat stderr ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Return for a 
new token: Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 | 1 = 2 = 3 ./calc.at:1360: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1358: cat stderr input: | | +1 ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1358: $PREPARSER ./calc input stderr: ./calc.at:1360: cat stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | | +1 stderr: ./calc.at:1360: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: 
) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1358: cat stderr ./calc.at:1358: $PREPARSER ./calc /dev/null ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of file Cleanup: discarding lookahead token end of file (1.1: ) Stack now 0 ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1360: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 
1.1: syntax error, unexpected end of file Cleanup: discarding lookahead token end of file (1.1: ) Stack now 0 ./calc.at:1360: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of file Cleanup: discarding lookahead token end of file (1.1: ) Stack now 0 ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1358: cat stderr Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of file Cleanup: discarding lookahead token end of file (1.1: ) Stack now 0 input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1358: $PREPARSER ./calc input ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Return for a new token: Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Return for a new token: Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Return for a new token: Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Return for a new token: Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Return for a new token: Reading a token 
Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Return for a new token: Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Return for a new token: Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Return 
for a new token: Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Return for a new token: Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Return for a new token: Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 
0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Return for a new token: Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Return for a new token: Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Return for a new token: Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Return for a new token: Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Return for a new token: Reading a token 
Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Return for a new token: Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Return for a new token: Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Return 
for a new token: Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Return for a new token: Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Return for a new token: Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 
0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.47-2.0: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1360: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 
12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 101): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 
1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1358: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 
12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 101): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 
1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | (!!) + (1 2) = 1 ./calc.at:1358: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Return for a new token: Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 129): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Return for a new token: Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' 
(1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: ./calc.at:1360: cat stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: Reading a token Next token is token '!' (1.2: ) Shifting token '!' 
(1.2: ) Entering state 5 Stack now 0 4 5 Return for a new token: Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 129): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Return for a new token: Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: 
Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | (!!) + (1 2) = 1 ./calc.at:1360: $PREPARSER ./calc input ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1358: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 129): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 
11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | (- *) + (1 2) = 1 ./calc.at:1358: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' 
(1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 129): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) 
-> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Return for a new token: Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 128): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Return for a new token: Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm 
exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Return for a new token: Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 128): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Return for a new token: Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack 
now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1360: cat stderr ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: ./calc.at:1358: cat stderr | (- *) + (1 2) = 1 ./calc.at:1360: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 128): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) 
Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack 
now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | (* *) + (*) + (*) ./calc.at:1358: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 128): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.10: 1) -> $$ = 
nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering 
state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: cat stderr ./calc.at:1358: cat stderr input: | (* *) + (*) + (*) ./calc.at:1360: $PREPARSER ./calc input input: | 1 + 2 * 3 + !+ ++ ./calc.at:1358: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Return for a new token: Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Return for a new token: Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '!' 
(1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Return for a new token: Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 130): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Return for a new token: Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Return for a new token: Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Return for a new token: Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 130): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1358: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 + 2 * 3 + !- ++ ./calc.at:1358: $PREPARSER ./calc input ./calc.at:1360: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Return for a new token: Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Return for a new token: Reading a token Next token 
is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Return for a new token: Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 131): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Return for a new token: Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack 
now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Return for a new token: Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Return for a new token: Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 131): $1 = token '!' 
(1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) input: | 1 + 2 * 3 + !+ ++ ./calc.at:1360: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 130): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1358: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 
Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 130): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1360: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1358: $PREPARSER ./calc input input: stderr: | 1 + 2 * 3 + !- ++ ./calc.at:1360: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Return for a new token: Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 
Stack now 0 8 21 30 22 31 Return for a new token: Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Return for a new token: Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 132): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 
Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 131): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Return for a new token: Reading a token Next token is token number (1.9: 3) 
Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Return for a new token: Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Return for a new token: Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 132): $1 = token '!' 
(1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 131): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1358: cat stderr ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: ./calc.at:1360: cat stderr | (#) + (#) = 2222 ./calc.at:1358: $PREPARSER ./calc input input: | 1 + 2 * 3 + !* ++ stderr: ./calc.at:1360: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: 1.2: syntax error: invalid character: '#' Reading a token Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next 
token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: 1.8: syntax error: invalid character: '#' Reading a token Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = 
token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' 
(1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 132): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: 1.2: syntax error: invalid character: '#' Reading a token Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Return for a new token: Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Return for a new token: 1.8: syntax error: invalid character: '#' Reading a token Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Return for a new token: Reading a token Next token 
is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Return for a new token: Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 132): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1358: cat stderr ./calc.at:1360: cat stderr input: input: | (1 + #) = 1111 | (#) + (#) = 2222 ./calc.at:1360: $PREPARSER ./calc input ./calc.at:1358: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Return for a new token: Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Return for a new token: 1.6: syntax error: invalid character: '#' Reading a token Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token 
(1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) 
Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Return for a new token: Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Return for a new token: 1.6: syntax error: invalid character: '#' Reading a token Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token 
number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax 
error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: cat stderr ./calc.at:1358: cat stderr input: | (1 + #) = 1111 ./calc.at:1360: $PREPARSER ./calc input input: | (# + 1) = 1111 stderr: ./calc.at:1358: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 
Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: 1.2: syntax error: invalid character: '#' Reading a token Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = 
nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by 
rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: 1.2: syntax error: invalid character: '#' Reading a token Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = 
nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: cat stderr ./calc.at:1358: cat stderr input: | (# + 1) = 1111 ./calc.at:1360: $PREPARSER ./calc input input: stderr: | (1 + # + 1) = 1111 ./calc.at:1358: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 
1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Return for a new token: Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Return for a new token: 1.6: syntax error: invalid character: '#' Reading a token Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error 
(1.2-8: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> 
$$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Return for a new token: Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Return for a new token: 1.6: syntax error: invalid character: '#' Reading a token Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token number (1.10: 1) Error: discarding token number 
(1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Return for a new token: Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack now 0 8 19 Return for a new token: Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Stack now 0 8 19 28 Return for a new token: Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: cat stderr ./calc.at:1358: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1360: $PREPARSER ./calc input input: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) 
Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) | (1 + 1) / (1 - 1) ./calc.at:1358: $PREPARSER ./calc input ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Return for a new token: Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Return for a new token: Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Stack now 0 4 12 21 30 Return for a new token: Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Return for a new token: Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Return for a new token: Reading a token Next token is token 
number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Return for a new token: Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Return for a new token: Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Return for a new token: Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 23 32 Return for a new token: Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 115): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1358: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack 
now 0 8 19 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Return for a new token: Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Return for a new token: Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Return for a new token: Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Stack now 0 4 12 21 30 Return for a new token: Reading a 
token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Return for a new token: Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Return for a new token: Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Return for a new token: Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Return for a new token: Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Return for a new token: Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Return for a new token: Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 
23 32 Return for a new token: Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 115): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Return for a new token: Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1358: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: cat stderr ./calc.at:1358: cat stderr input: 488. 
calc.at:1358: ok | (1 + 1) / (1 - 1) ./calc.at:1360: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token number (1.16: 1) Shifting token 
number (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 115): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a 
token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 115): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1360: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1360: cat stderr 489. calc.at:1360: ok 491. calc.at:1363: testing Calculator %no-lines api.pure parse.error=verbose %debug %locations %header api.prefix={calc} %verbose %yacc %parse-param {semantic_value *result}{int *count}{int *nerrs} ... ./calc.at:1363: if "$POSIXLY_CORRECT_IS_EXPORTED"; then sed -e '/\/\* !POSIX \*\//d' calc.y.tmp >calc.y else mv calc.y.tmp calc.y fi ./calc.at:1363: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y 492. calc.at:1364: testing Calculator %no-lines api.pure parse.error=verbose %debug %locations %defines api.prefix={calc} %verbose %yacc %parse-param {semantic_value *result}{int *count}{int *nerrs} ... ./calc.at:1364: if "$POSIXLY_CORRECT_IS_EXPORTED"; then sed -e '/\/\* !POSIX \*\//d' calc.y.tmp >calc.y else mv calc.y.tmp calc.y fi ./calc.at:1364: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1363: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS ./calc.at:1364: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS stderr: stdout: ./calc.at:1362: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1362: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.c calc.h input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1362: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack 
now 0 8 19 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing 
stack by rule 9 (line 114): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 
= nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 101): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 102): 
$1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 101): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 124): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (5.1-4: -1) $2 = token '^' 
(5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 101): $1 = token 
number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is 
token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token 
Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 101): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next token is token 
number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: 
) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Reading 
a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 101): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack 
now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (14.1: ) Cleanup: popping nterm input (1.1-14.0: )
./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13: 7) -> $$ = 
nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 
Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 
18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 101): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' 
(4.10-5.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 101): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 124): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 
125): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 
Reducing stack by rule 5 (line 101): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 
Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 101): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering 
state 2 Stack now 0 6 8 19 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 101): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 124): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 101): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 
4 12 20 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-10.0: ) $2 = nterm line 
(10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 101): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 101): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) 
Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 101): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) 
Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 101): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 125): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 92): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1362: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1362: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1362: cat stderr input: | 1//2 ./calc.at:1362: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1362: cat stderr input: | error ./calc.at:1362: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1362: cat stderr input: | 1 = 2 = 3 ./calc.at:1362: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) 
Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1362: cat stderr input: | | +1 ./calc.at:1362: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 96): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1362: cat stderr ./calc.at:1362: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 
1.1: syntax error, unexpected end of file Cleanup: discarding lookahead token end of file (1.1: ) Stack now 0 ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of file Cleanup: discarding lookahead token end of file (1.1: ) Stack now 0 ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1362: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1362: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 
12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 101): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 
1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 
12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 101): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 
1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1362: cat stderr
input:
| (!!) + (1 2) = 1
./calc.at:1362: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 129): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 129): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1362: cat stderr
input:
| (- *) + (1 2) = 1
./calc.at:1362: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 128): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 112): $1 
= nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 128): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 112): $1 
= nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1362: cat stderr
input:
| (* *) + (*) + (*)
./calc.at:1362: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1362: cat stderr
input:
| 1 + 2 * 3 + !+ ++
./calc.at:1362: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 130): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is 
token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 130): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7)
./calc.at:1362: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
| 1 + 2 * 3 + !- ++
./calc.at:1362: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) 
Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 131): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 
8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 131): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1362: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1362: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 
(line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 132): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 101): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 101): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token 
Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 114): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 132): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1362: cat stderr input: | (#) + (#) = 2222 ./calc.at:1362: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm 
exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.1-3: 1111) $2 = 
token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1362: cat stderr input: | (1 + #) = 1111 ./calc.at:1362: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp 
(1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a 
token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1362: cat stderr input: | (# + 1) = 1111 ./calc.at:1362: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) 
Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 
Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1362: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1362: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack now 0 8 19 Reading a 
token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid 
token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 127): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 101): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 102): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1362: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1362: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.11: ) Shifting token 
'(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 115): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 21 1 Reducing stack by rule 5 (line 101): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Stack now 0 4 12 21 30 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 112): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 101): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a 
token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 101): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 113): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 126): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 115): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 97): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 91): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token end of file (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1362: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1362: cat stderr 490. calc.at:1362: ok 493. calc.at:1367: testing Calculator parse.error=custom ... ./calc.at:1367: mv calc.y.tmp calc.y ./calc.at:1367: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1367: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS stderr: stdout: ./calc.at:1363: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1363: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c calc.h input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1363: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) 
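The Perl one-liner that the test runs over `expout` rewrites custom diagnostics of the form `syntax error on token [X] (expected: [A] [B])` into Bison's standard wording, keeping the "or"-joined expectation list only when there are between two and four expected tokens (the `($#exps && $#exps < 4)` guard). A Python rendering of the same rewrite, for readability:

```python
import re

def normalize(line):
    """Python sketch of the test's Perl substitution on expout."""
    def repl(m):
        unexp = m.group(1)
        exps = re.findall(r"\[(.*?)\]", m.group(2))
        if 2 <= len(exps) <= 4:
            return ("syntax error, unexpected %s, expecting %s"
                    % (unexp, " or ".join(exps)))
        return "syntax error, unexpected %s" % unexp
    return re.sub(r"syntax error on token \[(.*?)\] \(expected: (.*)\)",
                  repl, line)

print(normalize("syntax error on token ['+'] (expected: [number] ['('])"))
```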
Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm 
exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 111): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 
= token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (4.1: ) $2 = 
nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 111): $1 = token '-' (5.2: ) $2 = nterm exp 
(5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 113): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) 
Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token 
Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 
2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token 
Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 113): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading 
a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is 
token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token 
is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 113): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = 
token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by 
rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 
Reducing stack by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 111): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' 
(2.16-3.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is 
token '=' (4.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' 
(5.4: ) Reducing stack by rule 11 (line 111): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 113): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) 
Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (7.9: 1) -> 
$$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 
8 (line 100): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) 
-> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 113): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' 
(10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm 
exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) 
Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 113): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 
64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1363: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1363: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token "number" (1.3: 2) Stack now 0 ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token 
"number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token "number" (1.3: 2) Stack now 0 ./calc.at:1363: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1363: cat stderr input: stderr: stdout: ./calc.at:1364: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' | 1//2 ./calc.at:1363: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1364: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.c calc.h ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1364: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1363: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1363: cat stderr stderr: input: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token 
Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 111): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing stack by rule 9 
(line 101): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input 
(1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 89): 
$1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 111): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 113): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (5.1-4: -1) $2 = 
token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 
(line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 
Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering 
state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack 
now 0 6 8 20 4 12 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 113): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input 
(1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is 
token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 
12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 113): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) | error ./calc.at:1363: $PREPARSER ./calc input stderr: ./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) Stack now 0 ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) Stack now 0 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = 
token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Stack now 0 6 8 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Stack now 0 6 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Stack now 0 6 8 21 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Stack now 0 6 8 
21 30 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 21 30 22 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Stack now 0 6 8 21 30 22 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 21 30 22 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 111): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Stack now 0 6 8 21 30 22 31 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Stack now 0 6 8 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting 
token '\n' (2.16-3.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Stack now 0 6 2 10 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Stack now 0 6 2 10 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Stack now 0 6 2 10 24 33 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 
Stack now 0 6 8 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 111): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Stack now 0 
6 4 12 27 Reducing stack by rule 13 (line 113): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) 
Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering 
state 28 Stack now 0 6 8 19 28 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Stack now 0 6 8 20 
Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 19 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Stack now 0 6 8 19 2 1 Reducing stack by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 19 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 111): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Stack now 0 6 8 19 28 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" 
(10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 20 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Stack now 0 6 8 20 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 20 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Stack now 0 6 8 20 4 12 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Stack now 0 6 8 20 4 12 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Stack now 0 6 8 20 4 12 20 29 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 20 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Stack now 0 6 8 20 4 12 27 Reducing stack by rule 13 (line 113): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = 
token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Stack now 0 6 8 24 33 24 Reading a token Next token is token "number" (12.5: 3) Shifting token 
"number" (12.5: 3) Entering state 1 Stack now 0 6 8 24 33 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Stack now 0 6 8 24 33 24 33 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Stack now 0 6 8 24 33 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.2: 
2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Stack now 0 6 4 12 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Stack now 0 6 4 12 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Stack now 0 6 4 12 24 33 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Stack now 0 6 4 12 27 Reducing stack by rule 13 (line 113): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Stack now 0 6 8 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Stack now 0 6 8 24 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Stack now 0 6 8 24 33 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 112): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '\n' (13.13-14.0: ) 
Reducing stack by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Stack now 0 6 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Stack now 0 6 18 Reducing stack by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (14.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1364: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1363: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" :
"syntax error, unexpected $unexp";
}eg
' expout || exit 77
./calc.at:1363: cat stderr
input:
  | 1 2
./calc.at:1364: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Stack now 0 1
Reducing stack by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Stack now 0 8
Reading a token
Next token is token "number" (1.3: 2)
1.3: syntax error, unexpected number
Error: popping nterm exp (1.1: 1)
Stack now 0
Cleanup: discarding lookahead token "number" (1.3: 2)
Stack now 0
input:
./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
  | 1 = 2 = 3
./calc.at:1363: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Stack now 0 1
Reducing stack by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '=' (1.3: )
Shifting token '=' (1.3: )
Entering state 19
Stack now 0 8 19
Reading a token
Next token is token "number" (1.5: 2)
Shifting token "number" (1.5: 2)
Entering state 1
Stack now 0 8 19 1
Reducing stack by rule 5 (line 88):
   $1 = token "number" (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 28
Stack now 0 8 19 28
Reading a token
Next token is token '=' (1.7: )
1.7: syntax error, unexpected '='
Error: popping nterm exp (1.5: 2)
Stack now 0 8 19
Error: popping token '=' (1.3: )
Stack now 0 8
Error: popping nterm exp (1.1: 1)
Stack now 0
Cleanup: discarding lookahead token '=' (1.7: )
Stack now 0
stderr:
./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token
"number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token "number" (1.3: 2) Stack now 0 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1364: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1363: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" :
"syntax error, unexpected $unexp";
}eg
' expout || exit 77
./calc.at:1364: cat stderr
./calc.at:1363: cat stderr
input:
  | 1//2
./calc.at:1364: $PREPARSER ./calc input
input:
stderr:
  |
  | +1
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Stack now 0 1
Reducing stack by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '/' (1.2: )
Shifting token '/' (1.2: )
Entering state 23
Stack now 0 8 23
Reading a token
Next token is token '/' (1.3: )
1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!'
Error: popping token '/' (1.2: )
Stack now 0 8
Error: popping nterm exp (1.1: 1)
Stack now 0
Cleanup: discarding lookahead token '/' (1.3: )
Stack now 0
./calc.at:1363: $PREPARSER ./calc input
./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token '\n' (1.1-2.0: )
Shifting token '\n' (1.1-2.0: )
Entering state 3
Stack now 0 3
Reducing stack by rule 3 (line 83):
   $1 = token '\n' (1.1-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Stack now 0 7
Reducing stack by rule 1 (line 78):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Stack now 0 6
Reading a token
Next token is token '+' (2.1: )
2.1: syntax error, unexpected '+'
Error: popping nterm input (1.1-2.0: )
Stack now 0
Cleanup: discarding lookahead token '+' (2.1: )
Stack now 0
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Stack now 0 1
Reducing stack by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Stack now 0 8
Reading a token
Next token is token
'/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1364: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1364: cat stderr ./calc.at:1363: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" :
"syntax error, unexpected $unexp";
}eg
' expout || exit 77
input:
  | error
./calc.at:1364: $PREPARSER ./calc input
./calc.at:1363: cat stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token "invalid token" (1.1: )
1.1: syntax error, unexpected invalid token
Cleanup: discarding lookahead token "invalid token" (1.1: )
Stack now 0
./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1363: $PREPARSER ./calc /dev/null
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token "invalid token" (1.1: )
1.1: syntax error, unexpected invalid token
Cleanup: discarding lookahead token "invalid token" (1.1: )
Stack now 0
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Now at end of input.
1.1: syntax error, unexpected end of input
Cleanup: discarding lookahead token "end of input" (1.1: )
Stack now 0
./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1364: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4) ?
    "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" :
    "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
Starting parse
Entering state 0
Stack now 0
Reading a token
Now at end of input.
1.1: syntax error, unexpected end of input
Cleanup: discarding lookahead token "end of input" (1.1: )
Stack now 0
./calc.at:1364: cat stderr
./calc.at:1363: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 = 2 = 3 ./calc.at:1363: cat stderr ./calc.at:1364: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1363: $PREPARSER ./calc input ./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Stack now 0 8 19 28 
Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 19 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next 
token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 88): $1 
= token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp 
(1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1364: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) 
Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 88): $1 
= token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp 
(1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1364: cat stderr ./calc.at:1363: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | | +1 ./calc.at:1364: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1363: cat stderr ./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | (!!) 
+ (1 2) = 1 ./calc.at:1363: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 116): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 
1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1364: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 116): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 
1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1364: cat stderr ./calc.at:1363: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1364: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) Stack now 0 ./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Now at end of input. 
1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) Stack now 0 ./calc.at:1363: cat stderr ./calc.at:1364: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: ./calc.at:1364: cat stderr | (- *) + (1 2) = 1 ./calc.at:1363: $PREPARSER ./calc input input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1364: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 115): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 
(line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) 
Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 88): $1 
= token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp 
(1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 115): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 
(line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Stack now 0 8 21 4 12 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Stack now 0 8 21 4 12 21 30 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) 
Entering state 12 Stack now 0 8 21 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Stack now 0 8 21 4 12 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 21 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 21 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 21 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Stack now 0 8 21 4 12 22 1 Reducing stack by rule 5 (line 88): $1 
= token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Stack now 0 8 21 4 12 22 31 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 21 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Stack now 0 8 21 4 12 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 21 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 21 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp 
(1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1364: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1363: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1364: cat stderr ./calc.at:1363: cat stderr input: input: | (* *) + (*) + (*) | (!!) + (1 2) = 1 ./calc.at:1363: $PREPARSER ./calc input ./calc.at:1364: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 116): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 
1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' 
(1.3: ) Entering state 16 Stack now 0 4 5 16 Reducing stack by rule 16 (line 116): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 21 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) 
Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1364: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1363: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1364: cat stderr ./calc.at:1363: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1363: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 117): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a 
token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 117): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1363: $EGREP -c -v 'Return for a new token:|LAC:' stderr | (- *) + (1 2) = 1 ./calc.at:1364: $PREPARSER ./calc input stderr: input: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 115): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 
88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line 
(1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) | 1 + 2 * 3 + !- ++ ./calc.at:1363: $PREPARSER ./calc input ./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a 
token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 118): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 115): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Stack now 0 8 21 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 21 4 12 Reading a token Next token is 
token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 21 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 118): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1364: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1363: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1364: cat stderr input: | (* *) + (*) + (*) ./calc.at:1363: cat stderr ./calc.at:1364: $PREPARSER ./calc input input: | 1 + 2 * 3 + !* ++ ./calc.at:1363: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 119): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 21 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 21 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 21 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 119): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1363: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1364: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1364: cat stderr input: ./calc.at:1363: cat stderr | 1 + 2 * 3 + !+ ++ ./calc.at:1364: $PREPARSER ./calc input input: stderr: | (#) + (#) = 2222 ./calc.at:1363: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token 
"number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 117): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.2: ) Error: discarding token "invalid token" (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next 
token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "invalid token" (1.8: ) Error: discarding token "invalid token" (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line 
(1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 21 5 14 Reducing stack by rule 17 (line 117): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1364: $EGREP -c -v 'Return for a new token:|LAC:' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.2: ) Error: discarding token "invalid token" (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "invalid token" (1.8: ) Error: discarding token "invalid token" (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) 
Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: stderr: stdout: ./calc.at:1363: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1367: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' | 1 + 2 * 3 + !- ++ ./calc.at:1364: $PREPARSER ./calc input ./calc.at:1367: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c stderr: ./calc.at:1363: cat stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) 
Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 118): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) input: ./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | (1 + #) = 1111 ./calc.at:1363: $PREPARSER ./calc input input: stderr: stderr: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) 
Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 21 5 13 Reducing stack by rule 18 (line 118): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1367: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.6: ) Error: discarding token "invalid token" (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 
Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1364: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.6: ) Error: discarding token "invalid token" (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token 
is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: ./calc.at:1367: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1364: cat stderr ./calc.at:1363: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: input: | 1 2 ./calc.at:1367: $PREPARSER ./calc input | 1 + 2 * 3 + !* ++ ./calc.at:1364: $PREPARSER ./calc input stderr: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1363: cat stderr ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a 
token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 119): $1 = token '!' 
(1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) input: | (# + 1) = 1111 ./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1363: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Stack now 0 8 21 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Stack now 0 8 21 30 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Stack now 0 8 21 30 22 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Stack now 0 8 21 30 22 31 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Stack now 0 8 21 30 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '!' 
(1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 21 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Stack now 0 8 21 5 15 Reducing stack by rule 19 (line 119): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Stack now 0 8 21 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.2: ) Error: discarding token "invalid token" (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is 
token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1367: cat stderr ./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1//2 ./calc.at:1367: $PREPARSER ./calc input ./calc.at:1364: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.2: ) Error: discarding token "invalid token" (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token 
'\n' (1.15-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1364: cat stderr ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) input: | (#) + (#) = 2222 ./calc.at:1364: $PREPARSER ./calc input ./calc.at:1363: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.2: ) Error: discarding token "invalid token" (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "invalid token" (1.8: ) Error: discarding token "invalid token" (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next 
token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1363: cat stderr ./calc.at:1367: cat stderr ./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.2: ) Error: discarding token "invalid token" (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Stack now 0 8 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 21 4 Reading a token 1.8: syntax error: invalid character: '#' Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Next token is token "invalid token" (1.8: ) Error: discarding token "invalid token" (1.8: ) Error: popping token error (1.8: ) Stack now 0 8 21 4 Shifting token error (1.8: ) Entering state 11 Stack now 0 8 21 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Stack now 0 8 21 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Stack now 0 8 21 30 Reading a token Next token is token '=' (1.11: ) 
Reducing stack by rule 7 (line 99): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Stack now 0 8 19 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Stack now 0 8 19 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Stack now 0 8 25 Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | error ./calc.at:1367: $PREPARSER ./calc input | (1 + # + 1) = 1111 ./calc.at:1363: $PREPARSER ./calc input stderr: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1364: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Stack now 0 4 12 21 Reading a token 1.6: syntax error: invalid character: '#' Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token "invalid token" (1.6: ) Error: discarding token "invalid token" (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Stack now 0 4 11 26 Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Stack now 0 8 19 Reading a token Next token is token "number" (1.15-18: 1111) Shifting token "number" (1.15-18: 1111) 
Entering state 1
Stack now 0 8 19 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111)
Entering state 28
Stack now 0 8 19 28
Reading a token
Next token is token '\n' (1.19-2.0: )
Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111)
Entering state 8
Stack now 0 8
Next token is token '\n' (1.19-2.0: )
Shifting token '\n' (1.19-2.0: )
Entering state 25
Stack now 0 8 25
Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: )
Entering state 7
Stack now 0 7
Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: )
Entering state 6
Stack now 0 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Stack now 0 6 17
Stack now 0 6 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
stderr:
syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!'])
./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1364: cat stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Stack now 0 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Stack now 0 4 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1)
Entering state 12
Stack now 0 4 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 21
Stack now 0 4 12 21
Reading a token
1.6: syntax error: invalid character: '#'
Error: popping token '+' (1.4: )
Stack now 0 4 12
Error: popping nterm exp (1.2: 1)
Stack now 0 4
Shifting token error (1.2-6: )
Entering state 11
Stack now 0 4 11
Next token is token "invalid token" (1.6: )
Error: discarding token "invalid token" (1.6: )
Error: popping token error (1.2-6: )
Stack now 0 4
Shifting token error (1.2-6: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token '+' (1.8: )
Error: discarding token '+' (1.8: )
Error: popping token error (1.2-6: )
Stack now 0 4
Shifting token error (1.2-8: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token "number" (1.10: 1)
Error: discarding token "number" (1.10: 1)
Error: popping token error (1.2-8: )
Stack now 0 4
Shifting token error (1.2-10: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token ')' (1.11: )
Shifting token ')' (1.11: )
Entering state 26
Stack now 0 4 11 26
Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '=' (1.13: )
Shifting token '=' (1.13: )
Entering state 19
Stack now 0 8 19
Reading a token
Next token is token "number" (1.15-18: 1111)
Shifting token "number" (1.15-18: 1111)
Entering state 1
Stack now 0 8 19 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111)
Entering state 28
Stack now 0 8 19 28
Reading a token
Next token is token '\n' (1.19-2.0: )
Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111)
Entering state 8
Stack now 0 8
Next token is token '\n' (1.19-2.0: )
Shifting token '\n' (1.19-2.0: )
Entering state 25
Stack now 0 8 25
Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: )
Entering state 7
Stack now 0 7
Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: )
Entering state 6
Stack now 0 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Stack now 0 6 17
Stack now 0 6 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1367: cat stderr
input:
| (1 + #) = 1111
./calc.at:1364: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Stack now 0 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Stack now 0 4 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1)
Entering state 12
Stack now 0 4 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 21
Stack now 0 4 12 21
Reading a token
1.6: syntax error: invalid character: '#'
Error: popping token '+' (1.4: )
Stack now 0 4 12
Error: popping nterm exp (1.2: 1)
Stack now 0 4
Shifting token error (1.2-6: )
Entering state 11
Stack now 0 4 11
Next token is token "invalid token" (1.6: )
Error: discarding token "invalid token" (1.6: )
Error: popping token error (1.2-6: )
Stack now 0 4
Shifting token error (1.2-6: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token ')' (1.7: )
Shifting token ')' (1.7: )
Entering state 26
Stack now 0 4 11 26
Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '=' (1.9: )
Shifting token '=' (1.9: )
Entering state 19
Stack now 0 8 19
Reading a token
Next token is token "number" (1.11-14: 1111)
Shifting token "number" (1.11-14: 1111)
Entering state 1
Stack now 0 8 19 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111)
Entering state 28
Stack now 0 8 19 28
Reading a token
Next token is token '\n' (1.15-2.0: )
Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111)
Entering state 8
Stack now 0 8
Next token is token '\n' (1.15-2.0: )
Shifting token '\n' (1.15-2.0: )
Entering state 25
Stack now 0 8 25
Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: )
Entering state 7
Stack now 0 7
Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: )
Entering state 6
Stack now 0 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Stack now 0 6 17
Stack now 0 6 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
input:
| 1 = 2 = 3
./calc.at:1367: $PREPARSER ./calc input
stderr:
./calc.at:1363: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^'])
./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Stack now 0 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Stack now 0 4 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1)
Entering state 12
Stack now 0 4 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 21
Stack now 0 4 12 21
Reading a token
1.6: syntax error: invalid character: '#'
Error: popping token '+' (1.4: )
Stack now 0 4 12
Error: popping nterm exp (1.2: 1)
Stack now 0 4
Shifting token error (1.2-6: )
Entering state 11
Stack now 0 4 11
Next token is token "invalid token" (1.6: )
Error: discarding token "invalid token" (1.6: )
Error: popping token error (1.2-6: )
Stack now 0 4
Shifting token error (1.2-6: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token ')' (1.7: )
Shifting token ')' (1.7: )
Entering state 26
Stack now 0 4 11 26
Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '=' (1.9: )
Shifting token '=' (1.9: )
Entering state 19
Stack now 0 8 19
Reading a token
Next token is token "number" (1.11-14: 1111)
Shifting token "number" (1.11-14: 1111)
Entering state 1
Stack now 0 8 19 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111)
Entering state 28
Stack now 0 8 19 28
Reading a token
Next token is token '\n' (1.15-2.0: )
Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111)
Entering state 8
Stack now 0 8
Next token is token '\n' (1.15-2.0: )
Shifting token '\n' (1.15-2.0: )
Entering state 25
Stack now 0 8 25
Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: )
Entering state 7
Stack now 0 7
Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: )
Entering state 6
Stack now 0 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Stack now 0 6 17
Stack now 0 6 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1363: cat stderr
syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^'])
./calc.at:1364: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
input:
./calc.at:1367: cat stderr
| (1 + 1) / (1 - 1)
./calc.at:1363: $PREPARSER ./calc input
input:
|
| +1
./calc.at:1367: $PREPARSER ./calc input
stderr:
syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!'])
./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1364: cat stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Stack now 0 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Stack now 0 4 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1)
Entering state 12
Stack now 0 4 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 21
Stack now 0 4 12 21
Reading a token
Next token is token "number" (1.6: 1)
Shifting token "number" (1.6: 1)
Entering state 1
Stack now 0 4 12 21 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1)
Entering state 30
Stack now 0 4 12 21 30
Reading a token
Next token is token ')' (1.7: )
Reducing stack by rule 7 (line 99): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2)
Entering state 12
Stack now 0 4 12
Next token is token ')' (1.7: )
Shifting token ')' (1.7: )
Entering state 27
Stack now 0 4 12 27
Reducing stack by rule 13 (line 113): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '/' (1.9: )
Shifting token '/' (1.9: )
Entering state 23
Stack now 0 8 23
Reading a token
Next token is token '(' (1.11: )
Shifting token '(' (1.11: )
Entering state 4
Stack now 0 8 23 4
Reading a token
Next token is token "number" (1.12: 1)
Shifting token "number" (1.12: 1)
Entering state 1
Stack now 0 8 23 4 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1)
Entering state 12
Stack now 0 8 23 4 12
Reading a token
Next token is token '-' (1.14: )
Shifting token '-' (1.14: )
Entering state 20
Stack now 0 8 23 4 12 20
Reading a token
Next token is token "number" (1.16: 1)
Shifting token "number" (1.16: 1)
Entering state 1
Stack now 0 8 23 4 12 20 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1)
Entering state 29
Stack now 0 8 23 4 12 20 29
Reading a token
Next token is token ')' (1.17: )
Reducing stack by rule 8 (line 100): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0)
Entering state 12
Stack now 0 8 23 4 12
Next token is token ')' (1.17: )
Shifting token ')' (1.17: )
Entering state 27
Stack now 0 8 23 4 12 27
Reducing stack by rule 13 (line 113): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0)
Entering state 32
Stack now 0 8 23 32
Reading a token
Next token is token '\n' (1.18-2.0: )
Reducing stack by rule 10 (line 102): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0)
1.11-17: error: null divisor
-> $$ = nterm exp (1.1-17: 2)
Entering state 8
Stack now 0 8
Next token is token '\n' (1.18-2.0: )
Shifting token '\n' (1.18-2.0: )
Entering state 25
Stack now 0 8 25
Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: )
Entering state 7
Stack now 0 7
Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: )
Entering state 6
Stack now 0 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Stack now 0 6 17
Stack now 0 6 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
| (# + 1) = 1111
./calc.at:1364: $PREPARSER ./calc input
stderr:
stderr:
syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!'])
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Stack now 0 4
Reading a token
1.2: syntax error: invalid character: '#'
Shifting token error (1.2: )
Entering state 11
Stack now 0 4 11
Next token is token "invalid token" (1.2: )
Error: discarding token "invalid token" (1.2: )
Error: popping token error (1.2: )
Stack now 0 4
Shifting token error (1.2: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token '+' (1.4: )
Error: discarding token '+' (1.4: )
Error: popping token error (1.2: )
Stack now 0 4
Shifting token error (1.2-4: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token "number" (1.6: 1)
Error: discarding token "number" (1.6: 1)
Error: popping token error (1.2-4: )
Stack now 0 4
Shifting token error (1.2-6: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token ')' (1.7: )
Shifting token ')' (1.7: )
Entering state 26
Stack now 0 4 11 26
Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '=' (1.9: )
Shifting token '=' (1.9: )
Entering state 19
Stack now 0 8 19
Reading a token
Next token is token "number" (1.11-14: 1111)
Shifting token "number" (1.11-14: 1111)
Entering state 1
Stack now 0 8 19 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111)
Entering state 28
Stack now 0 8 19 28
Reading a token
Next token is token '\n' (1.15-2.0: )
Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111)
Entering state 8
Stack now 0 8
Next token is token '\n' (1.15-2.0: )
Shifting token '\n' (1.15-2.0: )
Entering state 25
Stack now 0 8 25
Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: )
Entering state 7
Stack now 0 7
Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: )
Entering state 6
Stack now 0 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Stack now 0 6 17
Stack now 0 6 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Stack now 0 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Stack now 0 4 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1)
Entering state 12
Stack now 0 4 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 21
Stack now 0 4 12 21
Reading a token
Next token is token "number" (1.6: 1)
Shifting token "number" (1.6: 1)
Entering state 1
Stack now 0 4 12 21 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1)
Entering state 30
Stack now 0 4 12 21 30
Reading a token
Next token is token ')' (1.7: )
Reducing stack by rule 7 (line 99): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2)
Entering state 12
Stack now 0 4 12
Next token is token ')' (1.7: )
Shifting token ')' (1.7: )
Entering state 27
Stack now 0 4 12 27
Reducing stack by rule 13 (line 113): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '/' (1.9: )
Shifting token '/' (1.9: )
Entering state 23
Stack now 0 8 23
Reading a token
Next token is token '(' (1.11: )
Shifting token '(' (1.11: )
Entering state 4
Stack now 0 8 23 4
Reading a token
Next token is token "number" (1.12: 1)
Shifting token "number" (1.12: 1)
Entering state 1
Stack now 0 8 23 4 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1)
Entering state 12
Stack now 0 8 23 4 12
Reading a token
Next token is token '-' (1.14: )
Shifting token '-' (1.14: )
Entering state 20
Stack now 0 8 23 4 12 20
Reading a token
Next token is token "number" (1.16: 1)
Shifting token "number" (1.16: 1)
Entering state 1
Stack now 0 8 23 4 12 20 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1)
Entering state 29
Stack now 0 8 23 4 12 20 29
Reading a token
Next token is token ')' (1.17: )
Reducing stack by rule 8 (line 100): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0)
Entering state 12
Stack now 0 8 23 4 12
Next token is token ')' (1.17: )
Shifting token ')' (1.17: )
Entering state 27
Stack now 0 8 23 4 12 27
Reducing stack by rule 13 (line 113): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0)
Entering state 32
Stack now 0 8 23 32
Reading a token
Next token is token '\n' (1.18-2.0: )
Reducing stack by rule 10 (line 102): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0)
1.11-17: error: null divisor
-> $$ = nterm exp (1.1-17: 2)
Entering state 8
Stack now 0 8
Next token is token '\n' (1.18-2.0: )
Shifting token '\n' (1.18-2.0: )
Entering state 25
Stack now 0 8 25
Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: )
Entering state 7
Stack now 0 7
Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: )
Entering state 6
Stack now 0 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Stack now 0 6 17
Stack now 0 6 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Stack now 0 4
Reading a token
1.2: syntax error: invalid character: '#'
Shifting token error (1.2: )
Entering state 11
Stack now 0 4 11
Next token is token "invalid token" (1.2: )
Error: discarding token "invalid token" (1.2: )
Error: popping token error (1.2: )
Stack now 0 4
Shifting token error (1.2: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token '+' (1.4: )
Error: discarding token '+' (1.4: )
Error: popping token error (1.2: )
Stack now 0 4
Shifting token error (1.2-4: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token "number" (1.6: 1)
Error: discarding token "number" (1.6: 1)
Error: popping token error (1.2-4: )
Stack now 0 4
Shifting token error (1.2-6: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token ')' (1.7: )
Shifting token ')' (1.7: )
Entering state 26
Stack now 0 4 11 26
Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '=' (1.9: )
Shifting token '=' (1.9: )
Entering state 19
Stack now 0 8 19
Reading a token
Next token is token "number" (1.11-14: 1111)
Shifting token "number" (1.11-14: 1111)
Entering state 1
Stack now 0 8 19 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111)
Entering state 28
Stack now 0 8 19 28
Reading a token
Next token is token '\n' (1.15-2.0: )
Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111)
Entering state 8
Stack now 0 8
Next token is token '\n' (1.15-2.0: )
Shifting token '\n' (1.15-2.0: )
Entering state 25
Stack now 0 8 25
Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: )
Entering state 7
Stack now 0 7
Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: )
Entering state 6
Stack now 0 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Stack now 0 6 17
Stack now 0 6 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1364: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1363: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1367: cat stderr
./calc.at:1364: cat stderr
./calc.at:1363: cat stderr
input:
./calc.at:1367: $PREPARSER ./calc /dev/null
| (1 + # + 1) = 1111
./calc.at:1364: $PREPARSER ./calc input
491. calc.at:1363: ok
stderr:
stderr:
syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!'])
./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Stack now 0 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Stack now 0 4 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1)
Entering state 12
Stack now 0 4 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 21
Stack now 0 4 12 21
Reading a token
1.6: syntax error: invalid character: '#'
Error: popping token '+' (1.4: )
Stack now 0 4 12
Error: popping nterm exp (1.2: 1)
Stack now 0 4
Shifting token error (1.2-6: )
Entering state 11
Stack now 0 4 11
Next token is token "invalid token" (1.6: )
Error: discarding token "invalid token" (1.6: )
Error: popping token error (1.2-6: )
Stack now 0 4
Shifting token error (1.2-6: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token '+' (1.8: )
Error: discarding token '+' (1.8: )
Error: popping token error (1.2-6: )
Stack now 0 4
Shifting token error (1.2-8: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token "number" (1.10: 1)
Error: discarding token "number" (1.10: 1)
Error: popping token error (1.2-8: )
Stack now 0 4
Shifting token error (1.2-10: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token ')' (1.11: )
Shifting token ')' (1.11: )
Entering state 26
Stack now 0 4 11 26
Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '=' (1.13: )
Shifting token '=' (1.13: )
Entering state 19
Stack now 0 8 19
Reading a token
Next token is token "number" (1.15-18: 1111)
Shifting token "number" (1.15-18: 1111)
Entering state 1
Stack now 0 8 19 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111)
Entering state 28
Stack now 0 8 19 28
Reading a token
Next token is token '\n' (1.19-2.0: )
Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111)
Entering state 8
Stack now 0 8
Next token is token '\n' (1.19-2.0: )
Shifting token '\n' (1.19-2.0: )
Entering state 25
Stack now 0 8 25
Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: )
Entering state 7
Stack now 0 7
Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: )
Entering state 6
Stack now 0 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Stack now 0 6 17
Stack now 0 6 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!'])
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Stack now 0 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Stack now 0 4 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1)
Entering state 12
Stack now 0 4 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 21
Stack now 0 4 12 21
Reading a token
1.6: syntax error: invalid character: '#'
Error: popping token '+' (1.4: )
Stack now 0 4 12
Error: popping nterm exp (1.2: 1)
Stack now 0 4
Shifting token error (1.2-6: )
Entering state 11
Stack now 0 4 11
Next token is token "invalid token" (1.6: )
Error: discarding token "invalid token" (1.6: )
Error: popping token error (1.2-6: )
Stack now 0 4
Shifting token error (1.2-6: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token '+' (1.8: )
Error: discarding token '+' (1.8: )
Error: popping token error (1.2-6: )
Stack now 0 4
Shifting token error (1.2-8: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token "number" (1.10: 1)
Error: discarding token "number" (1.10: 1)
Error: popping token error (1.2-8: )
Stack now 0 4
Shifting token error (1.2-10: )
Entering state 11
Stack now 0 4 11
Reading a token
Next token is token ')' (1.11: )
Shifting token ')' (1.11: )
Entering state 26
Stack now 0 4 11 26
Reducing stack by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '=' (1.13: )
Shifting token '=' (1.13: )
Entering state 19
Stack now 0 8 19
Reading a token
Next token is token "number" (1.15-18: 1111)
Shifting token "number" (1.15-18: 1111)
Entering state 1
Stack now 0 8 19 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111)
Entering state 28
Stack now 0 8 19 28
Reading a token
Next token is token '\n' (1.19-2.0: )
Reducing stack by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111)
Entering state 8
Stack now 0 8
Next token is token '\n' (1.19-2.0: )
Shifting token '\n' (1.19-2.0: )
Entering state 25
Stack now 0 8 25
Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: )
Entering state 7
Stack now 0 7
Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: )
Entering state 6
Stack now 0 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Stack now 0 6 17
Stack now 0 6 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1364: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1367: cat stderr
./calc.at:1364: cat stderr
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1367: $PREPARSER ./calc input
stderr:
input:
syntax error on token [')'] (expected: [number] ['-'] ['('] ['!'])
syntax error on token [')'] (expected: [number] ['-'] ['('] ['!'])
syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
error: 4444 != 1
./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
| (1 + 1) / (1 - 1)
./calc.at:1364: $PREPARSER ./calc input
stderr:
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Stack now 0 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Stack now 0 4 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1)
Entering state 12
Stack now 0 4 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 21
Stack now 0 4 12 21
Reading a token
Next token is token "number" (1.6: 1)
Shifting token "number" (1.6: 1)
Entering state 1
Stack now 0 4 12 21 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1)
Entering state 30
Stack now 0 4 12 21 30
Reading a token
Next token is token ')' (1.7: )
Reducing stack by rule 7 (line 99): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2)
Entering state 12
Stack now 0 4 12
Next token is token ')' (1.7: )
Shifting token ')' (1.7: )
Entering state 27
Stack now 0 4 12 27
Reducing stack by rule 13 (line 113): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '/' (1.9: )
Shifting token '/' (1.9: )
Entering state 23
Stack now 0 8 23
Reading a token
Next token is token '(' (1.11: )
Shifting token '(' (1.11: )
Entering state 4
Stack now 0 8 23 4
Reading a token
Next token is token "number" (1.12: 1)
Shifting token "number" (1.12: 1)
Entering state 1
Stack now 0 8 23 4 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1)
Entering state 12
Stack now 0 8 23 4 12
Reading a token
Next token is token '-' (1.14: )
Shifting token '-' (1.14: )
Entering state 20
Stack now 0 8 23 4 12 20
Reading a token
Next token is token "number" (1.16: 1)
Shifting token "number" (1.16: 1)
Entering state 1
Stack now 0 8 23 4 12 20 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1)
Entering state 29
Stack now 0 8 23 4 12 20 29
Reading a token
Next token is token ')' (1.17: )
Reducing stack by rule 8 (line 100): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0)
Entering state 12
Stack now 0 8 23 4 12
Next token is token ')' (1.17: )
Shifting token ')' (1.17: )
Entering state 27
Stack now 0 8 23 4 12 27
Reducing stack by rule 13 (line 113): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0)
Entering state 32
Stack now 0 8 23 32
Reading a token
Next token is token '\n' (1.18-2.0: )
Reducing stack by rule 10 (line 102): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0)
1.11-17: error: null divisor
-> $$ = nterm exp (1.1-17: 2)
Entering state 8
Stack now 0 8
Next token is token '\n' (1.18-2.0: )
Shifting token '\n' (1.18-2.0: )
Entering state 25
Stack now 0 8 25
Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: )
Entering state 7
Stack now 0 7
Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: )
Entering state 6
Stack now 0 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Stack now 0 6 17
Stack now 0 6 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
syntax error on token [')'] (expected: [number] ['-'] ['('] ['!'])
syntax error on token [')'] (expected: [number] ['-'] ['('] ['!'])
syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
error: 4444 != 1
./calc.at:1364: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Stack now 0 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Stack now 0 4 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1)
Entering state 12
Stack now 0 4 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 21
Stack now 0 4 12 21
Reading a token
Next token is token "number" (1.6: 1)
Shifting token "number" (1.6: 1)
Entering state 1
Stack now 0 4 12 21 1
Reducing stack by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1)
Entering state 30
Stack now 0 4 12 21 30
Reading a token
Next token is token ')' (1.7: )
Reducing stack by rule 7 (line 99): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2)
Entering state 12
Stack now 0 4 12
Next token is token ')' (1.7: )
Shifting token ')' (1.7: ) Entering state 27 Stack now 0 4 12 27 Reducing stack by rule 13 (line 113): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Stack now 0 8 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 23 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Stack now 0 8 23 4 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 23 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Stack now 0 8 23 4 12 20 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Stack now 0 8 23 4 12 20 1 Reducing stack by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Stack now 0 8 23 4 12 20 29 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 100): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 23 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Stack now 0 8 23 4 12 27 Reducing stack by rule 13 (line 113): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Stack now 0 8 23 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 102): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Stack now 0 8 25 
Reducing stack by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Stack now 0 6 17 Stack now 0 6 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1367: cat stderr ./calc.at:1364: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 494. calc.at:1368: testing Calculator parse.error=custom %locations api.prefix={calc} ... ./calc.at:1368: mv calc.y.tmp calc.y input: ./calc.at:1364: cat stderr ./calc.at:1368: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y | (!!) + (1 2) = 1 ./calc.at:1367: $PREPARSER ./calc input stderr: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) error: 2222 != 1 492. 
calc.at:1364: ok ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) error: 2222 != 1 ./calc.at:1367: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1367: $PREPARSER ./calc input stderr: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) error: 2222 != 1 ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) error: 2222 != 1 ./calc.at:1367: cat stderr input: 495. calc.at:1369: testing Calculator parse.error=custom %locations api.prefix={calc} %parse-param {semantic_value *result}{int *count}{int *nerrs} ... | (* *) + (*) + (*) ./calc.at:1369: mv calc.y.tmp calc.y ./calc.at:1367: $PREPARSER ./calc input stderr: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1369: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1367: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1367: $PREPARSER ./calc input stderr: ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1367: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1368: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c 
$LIBS input: | 1 + 2 * 3 + !- ++ ./calc.at:1367: $PREPARSER ./calc input stderr: ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1367: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1367: $PREPARSER ./calc input stderr: memory exhausted ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: memory exhausted ./calc.at:1369: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS ./calc.at:1367: cat stderr input: | (#) + (#) = 2222 ./calc.at:1367: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1367: cat stderr input: | (1 + #) = 1111 ./calc.at:1367: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1367: cat stderr input: | (# + 1) = 1111 ./calc.at:1367: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1367: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1367: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1367: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1367: $PREPARSER ./calc input stderr: error: null divisor ./calc.at:1367: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: error: null divisor ./calc.at:1367: cat stderr 493. calc.at:1367: ok 496. 
calc.at:1370: testing Calculator parse.error=custom %locations api.prefix={calc} %parse-param {semantic_value *result}{int *count}{int *nerrs} api.push-pull=both api.pure=full ... ./calc.at:1370: mv calc.y.tmp calc.y ./calc.at:1370: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1370: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS stderr: stdout: ./calc.at:1368: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1368: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1368: $PREPARSER ./calc input stderr: ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1368: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1368: $PREPARSER ./calc input stderr: 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1368: cat stderr input: | 1//2 ./calc.at:1368: $PREPARSER ./calc input stderr: 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) stdout: ./calc.at:1369: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1369: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. 
== 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c ./calc.at:1368: cat stderr input: | error input: ./calc.at:1368: $PREPARSER ./calc input | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1369: $PREPARSER ./calc input stderr: stderr: 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1369: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1368: cat stderr input: input: | 1 2 ./calc.at:1369: $PREPARSER ./calc input | 1 = 2 = 3 ./calc.at:1368: $PREPARSER ./calc input stderr: 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) stderr: 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1369: cat stderr ./calc.at:1368: cat stderr input: | | +1 ./calc.at:1368: $PREPARSER ./calc input input: stderr: | 1//2 ./calc.at:1369: $PREPARSER ./calc input 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) stderr: ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) stderr: ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 2.1: syntax error on token 
['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) stderr: 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1368: cat stderr ./calc.at:1369: cat stderr ./calc.at:1368: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | error ./calc.at:1369: $PREPARSER ./calc input stderr: stderr: 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1368: cat stderr 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1368: $PREPARSER ./calc input ./calc.at:1369: cat stderr stderr: 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 = 2 = 3 stderr: ./calc.at:1369: $PREPARSER ./calc input 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 stderr: 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1368: cat stderr stderr: 1.7: syntax 
error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) input: | (!!) + (1 2) = 1 ./calc.at:1368: $PREPARSER ./calc input stderr: ./calc.at:1369: cat stderr 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | | +1 ./calc.at:1369: $PREPARSER ./calc input stderr: stderr: 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1368: cat stderr stderr: 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) input: | (- *) + (1 2) = 1 ./calc.at:1368: $PREPARSER ./calc input stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 ./calc.at:1369: cat stderr ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1369: $PREPARSER ./calc /dev/null stderr: stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1368: cat stderr stderr: 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) input: | (* *) + (*) + (*) ./calc.at:1368: $PREPARSER ./calc input stderr: ./calc.at:1369: cat stderr 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] 
['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1369: $PREPARSER ./calc input stderr: ./calc.at:1368: cat stderr 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1368: $PREPARSER ./calc input stderr: stderr: 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1368: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1369: cat stderr input: input: | 1 + 2 * 3 + !- ++ ./calc.at:1368: $PREPARSER ./calc input | (!!) 
+ (1 2) = 1 stderr: ./calc.at:1369: $PREPARSER ./calc input ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 stderr: ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 ./calc.at:1368: cat stderr input: ./calc.at:1369: cat stderr | 1 + 2 * 3 + !* ++ ./calc.at:1368: $PREPARSER ./calc input stderr: 1.14: memory exhausted ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (- *) + (1 2) = 1 ./calc.at:1369: $PREPARSER ./calc input stderr: 1.14: memory exhausted stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1368: cat stderr stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 input: | (#) + (#) = 2222 ./calc.at:1368: $PREPARSER ./calc input stderr: ./calc.at:1369: cat stderr 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' | (* *) + (*) + (*) ./calc.at:1369: $PREPARSER ./calc input stderr: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1368: cat stderr 
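Each test above compares the parser's stderr only after piping it through `sed -e '/^profiling:.*:Merge mismatch for summaries/d'`, which drops gcov coverage-merge warnings that would otherwise make the comparison flaky. A minimal Python sketch of that filter (the function name `strip_profiling_noise` is illustrative, not part of the harness):

```python
import re

def strip_profiling_noise(stderr_text):
    """Drop gcov 'Merge mismatch for summaries' lines before comparing
    stderr against the expected output, mirroring the harness's sed filter
    /^profiling:.*:Merge mismatch for summaries/d."""
    keep = [line for line in stderr_text.splitlines()
            if not re.match(r"^profiling:.*:Merge mismatch for summaries", line)]
    return "\n".join(keep)
```

This keeps the semantic error output (e.g. `error: null divisor`) intact while discarding instrumentation noise.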
./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) input: | (1 + #) = 1111 ./calc.at:1368: $PREPARSER ./calc input stderr: ./calc.at:1369: cat stderr 1.6: syntax error: invalid character: '#' ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' input: | 1 + 2 * 3 + !+ ++ ./calc.at:1369: $PREPARSER ./calc input stderr: ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1369: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1368: cat stderr input: input: | (# + 1) = 1111 ./calc.at:1368: $PREPARSER ./calc input | 1 + 2 * 3 + !- ++ ./calc.at:1369: $PREPARSER ./calc input stderr: stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' stderr: ./calc.at:1368: cat stderr ./calc.at:1369: cat stderr input: input: | (1 + # + 1) = 1111 ./calc.at:1368: $PREPARSER ./calc input | 1 + 2 * 3 + !* ++ ./calc.at:1369: $PREPARSER ./calc input stderr: stderr: 1.14: memory exhausted 1.6: syntax error: invalid character: '#' ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' stderr: 1.14: memory exhausted ./calc.at:1369: cat stderr ./calc.at:1368: cat stderr input: input: | (#) + (#) = 2222 ./calc.at:1369: $PREPARSER ./calc input | (1 + 1) / (1 - 1) ./calc.at:1368: $PREPARSER ./calc input stderr: stderr: 1.2: syntax error: invalid 
character: '#' 1.8: syntax error: invalid character: '#' 1.11-17: error: null divisor ./calc.at:1368: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' stderr: 1.11-17: error: null divisor ./calc.at:1369: cat stderr ./calc.at:1368: cat stderr input: | (1 + #) = 1111 ./calc.at:1369: $PREPARSER ./calc input 494. calc.at:1368: ok stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1369: cat stderr input: | (# + 1) = 1111 ./calc.at:1369: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 497. calc.at:1371: testing Calculator parse.error=custom %locations api.prefix={calc} %parse-param {semantic_value *result}{int *count}{int *nerrs} api.push-pull=both api.pure=full parse.lac=full ... ./calc.at:1369: cat stderr ./calc.at:1371: mv calc.y.tmp calc.y ./calc.at:1371: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y input: | (1 + # + 1) = 1111 ./calc.at:1369: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1369: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1369: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1369: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1369: cat stderr 495. 
calc.at:1369: ok ./calc.at:1371: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS stderr: stdout: ./calc.at:1370: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE' ./calc.at:1370: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1370: $PREPARSER ./calc input 498. calc.at:1374: testing Calculator %start input exp NUM api.value.type=union ... stderr: ./calc.at:1374: mv calc.y.tmp calc.y ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1374: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y stderr: ./calc.at:1370: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1370: $PREPARSER ./calc input stderr: 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1370: cat stderr input: | 1//2 ./calc.at:1370: $PREPARSER ./calc input stderr: 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1370: cat stderr input: | error ./calc.at:1370: $PREPARSER ./calc input stderr: 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error on token [invalid token] (expected: 
[number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1370: cat stderr input: | 1 = 2 = 3 ./calc.at:1370: $PREPARSER ./calc input stderr: 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) ./calc.at:1374: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS ./calc.at:1370: cat stderr input: | | +1 ./calc.at:1370: $PREPARSER ./calc input stderr: 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1370: cat stderr ./calc.at:1370: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1370: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1370: $PREPARSER ./calc input stderr: 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 
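The `%locations` traces above print source spans such as `1.6`, `1.2-6`, and `1.18-2.0`, and every reduction gives the new nonterminal a span running from the start of its first child to the end of its last (what Bison's `YYLLOC_DEFAULT` computes). A hedged Python sketch of that span arithmetic and printing convention (the `Loc`/`merge` names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class Loc:
    """A source span, printed the way the traces print it: 'L.C' for a
    single position, 'L.C1-C2' within one line, 'L1.C1-L2.C2' otherwise."""
    first_line: int
    first_col: int
    last_line: int
    last_col: int

    def __str__(self):
        if (self.first_line, self.first_col) == (self.last_line, self.last_col):
            return f"{self.first_line}.{self.first_col}"
        if self.first_line == self.last_line:
            return f"{self.first_line}.{self.first_col}-{self.last_col}"
        return f"{self.first_line}.{self.first_col}-{self.last_line}.{self.last_col}"

def merge(first, last):
    """Location of a reduced nonterminal: start of the first child
    through the end of the last child."""
    return Loc(first.first_line, first.first_col, last.last_line, last.last_col)
```

For example, reducing `'(' exp ')'` with `'('` at 1.1 and `')'` at 1.7 yields the span `1.1-7`, matching the `-> $$ = nterm exp (1.1-7: 2)` lines in the trace.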
./calc.at:1370: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1370: $PREPARSER ./calc input stderr: 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 ./calc.at:1370: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1370: $PREPARSER ./calc input stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 ./calc.at:1370: cat stderr input: | (* *) + (*) + (*) ./calc.at:1370: $PREPARSER ./calc input stderr: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1370: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1370: $PREPARSER ./calc input stderr: ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1370: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1370: $PREPARSER ./calc input stderr: ./calc.at:1370: sed >&2 -e 
'/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1370: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1370: $PREPARSER ./calc input stderr: 1.14: memory exhausted ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.14: memory exhausted ./calc.at:1370: cat stderr input: | (#) + (#) = 2222 ./calc.at:1370: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1370: cat stderr input: | (1 + #) = 1111 ./calc.at:1370: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1370: cat stderr input: | (# + 1) = 1111 ./calc.at:1370: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1370: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1370: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1370: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1370: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1370: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1370: cat stderr 496. calc.at:1370: ok 499. calc.at:1375: testing Calculator %start input exp NUM api.value.type=union %locations parse.error=detailed ... 
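Because these tests build parsers with `parse.error=custom`, the harness's Perl one-liner (visible in the log) rewrites the custom `syntax error on token [X] (expected: ...)` messages in `expout` into Bison's standard wording, spelling out the expected tokens only when there are between two and four of them. A minimal Python re-implementation of that transformation for illustration (`normalize` is a hypothetical name):

```python
import re

_MSG = re.compile(r"syntax error on token \[(.*?)\] \(expected: (.*)\)")

def normalize(line):
    """Mirror the harness's Perl substitution: expand up to four expected
    tokens as 'expecting A or B ...', otherwise drop the expectation list."""
    def repl(m):
        unexp = m.group(1)
        exps = re.findall(r"\[(.*?)\]", m.group(2))
        n = len(exps) - 1          # Perl's $#exps (last index)
        if n and n < 4:            # i.e. 2..4 alternatives
            return ("syntax error, unexpected %s, expecting %s"
                    % (unexp, " or ".join(exps)))
        return "syntax error, unexpected %s" % unexp
    return _MSG.sub(repl, line)
```

So a message listing four candidates is expanded in full, while one listing seven (as after `1 2`) collapses to just `syntax error, unexpected number`.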
./calc.at:1375: mv calc.y.tmp calc.y
./calc.at:1375: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y
stderr:
stdout:
./calc.at:1374: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE'
./calc.at:1374: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines.
(eof || $. == 1) && /^\s*$/
# No trailing space.
|| /\s$/
# No tabs.
|| /\t/
)' calc.c
input:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1374: $PREPARSER ./calc input
stderr:
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1374: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
| 1 2
./calc.at:1374: $PREPARSER ./calc input
stderr:
syntax error
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS
./calc.at:1374: cat stderr
input:
| 1//2
./calc.at:1374: $PREPARSER ./calc input
stderr:
syntax error
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1374: cat stderr
input:
| error
./calc.at:1374: $PREPARSER ./calc input
stderr:
syntax error
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error
stderr:
stdout:
./calc.at:1371: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE'
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1371: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines.
(eof || $. == 1) && /^\s*$/
# No trailing space.
|| /\s$/
# No tabs.
|| /\t/
)' calc.c
./calc.at:1374: cat stderr
input:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1371: $PREPARSER ./calc input
input:
| 1 = 2 = 3
./calc.at:1374: $PREPARSER ./calc input
stderr:
stderr:
syntax error
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
stderr:
syntax error
./calc.at:1371: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
| 1 2
./calc.at:1371: $PREPARSER ./calc input
stderr:
1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n'])
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n'])
./calc.at:1374: cat stderr
./calc.at:1371: cat stderr
input:
input:
| 1//2
./calc.at:1371: $PREPARSER ./calc input
|
| +1
./calc.at:1374: $PREPARSER ./calc input
stderr:
stderr:
syntax error
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!'])
stderr:
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
syntax error
stderr:
1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!'])
./calc.at:1371: cat stderr
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
| error
./calc.at:1371: $PREPARSER ./calc input
stderr:
1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!'])
./calc.at:1374: cat stderr
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1374: $PREPARSER ./calc /dev/null
stderr:
stderr:
1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!'])
syntax error
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error
./calc.at:1371: cat stderr
input:
| 1 = 2 = 3
./calc.at:1371: $PREPARSER ./calc input
stderr:
1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^'] ['\n'])
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^'] ['\n'])
./calc.at:1374: cat stderr
./calc.at:1371: cat stderr
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1374: $PREPARSER ./calc input
stderr:
input:
|
| +1
syntax error
syntax error
syntax error
syntax error
error: 4444 != 1
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1371: $PREPARSER ./calc input
stderr:
stderr:
2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!'])
syntax error
syntax error
syntax error
syntax error
error: 4444 != 1
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!'])
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1371: cat stderr
./calc.at:1371: $PREPARSER ./calc /dev/null
stderr:
1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!'])
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1374: cat stderr
stderr:
input:
1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!'])
| (!!) + (1 2) = 1
./calc.at:1374: $PREPARSER ./calc input
stderr:
syntax error
error: 2222 != 1
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1371: cat stderr
stderr:
syntax error
error: 2222 != 1
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1371: $PREPARSER ./calc input
stderr:
1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!'])
1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!'])
1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
1.1-46: error: 4444 != 1
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!'])
1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!'])
1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
1.1-46: error: 4444 != 1
./calc.at:1374: cat stderr
./calc.at:1371: cat stderr
input:
| (- *) + (1 2) = 1
input:
| (!!) + (1 2) = 1
./calc.at:1374: $PREPARSER ./calc input
./calc.at:1371: $PREPARSER ./calc input
stderr:
syntax error
syntax error
error: 2222 != 1
stderr:
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')'])
1.1-16: error: 2222 != 1
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
stderr:
syntax error
syntax error
error: 2222 != 1
1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')'])
1.1-16: error: 2222 != 1
./calc.at:1371: cat stderr
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
| (- *) + (1 2) = 1
./calc.at:1371: $PREPARSER ./calc input
stderr:
1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')'])
1.1-17: error: 2222 != 1
./calc.at:1374: cat stderr
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')'])
1.1-17: error: 2222 != 1
input:
| (* *) + (*) + (*)
./calc.at:1374: $PREPARSER ./calc input
stderr:
syntax error
syntax error
syntax error
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1371: cat stderr
syntax error
syntax error
syntax error
input:
| (* *) + (*) + (*)
./calc.at:1371: $PREPARSER ./calc input
stderr:
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
./calc.at:1374: cat stderr
./calc.at:1371: cat stderr
input:
input:
| 1 + 2 * 3 + !+ ++
| 1 + 2 * 3 + !+ ++
./calc.at:1371: $PREPARSER ./calc input
./calc.at:1374: $PREPARSER ./calc input
stderr:
stderr:
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1374: $EGREP -c -v 'Return for a new token:|LAC:' stderr
stderr:
./calc.at:1371: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
| 1 + 2 * 3 + !- ++
./calc.at:1374: $PREPARSER ./calc input
stderr:
input:
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
| 1 + 2 * 3 + !- ++
./calc.at:1371: $PREPARSER ./calc input
stderr:
stderr:
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1371: cat stderr
input:
| 1 + 2 * 3 + !* ++
./calc.at:1371: $PREPARSER ./calc input
./calc.at:1374: cat stderr
stderr:
1.14: memory exhausted
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
stderr:
| 1 + 2 * 3 + !* ++
./calc.at:1374: $PREPARSER ./calc input
1.14: memory exhausted
stderr:
memory exhausted
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
memory exhausted
./calc.at:1371: cat stderr
input:
| (#) + (#) = 2222
./calc.at:1371: $PREPARSER ./calc input
stderr:
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
1.2: syntax error: invalid character: '#'
1.8: syntax error: invalid character: '#'
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error: invalid character: '#'
1.8: syntax error: invalid character: '#'
./calc.at:1374: cat stderr
./calc.at:1371: cat stderr
input:
| (#) + (#) = 2222
./calc.at:1374: $PREPARSER ./calc input
input:
stderr:
syntax error: invalid character: '#'
syntax error: invalid character: '#'
| (1 + #) = 1111
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1371: $PREPARSER ./calc input
stderr:
1.6: syntax error: invalid character: '#'
stderr:
syntax error: invalid character: '#'
syntax error: invalid character: '#'
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1371: cat stderr
input:
| (# + 1) = 1111
./calc.at:1371: $PREPARSER ./calc input
./calc.at:1374: cat stderr
stderr:
1.2: syntax error: invalid character: '#'
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
| (1 + #) = 1111
./calc.at:1374: $PREPARSER ./calc input
stderr:
stderr:
1.2: syntax error: invalid character: '#'
syntax error: invalid character: '#'
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error: invalid character: '#'
./calc.at:1371: cat stderr
input:
| (1 + # + 1) = 1111
./calc.at:1371: $PREPARSER ./calc input
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
1.6: syntax error: invalid character: '#'
./calc.at:1371: cat stderr
./calc.at:1374: cat stderr
input:
input:
| (# + 1) = 1111
./calc.at:1374: $PREPARSER ./calc input
| (1 + 1) / (1 - 1)
./calc.at:1371: $PREPARSER ./calc input
stderr:
stderr:
syntax error: invalid character: '#'
1.11-17: error: null divisor
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1371: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error: invalid character: '#'
stderr:
1.11-17: error: null divisor
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1371: cat stderr
497. calc.at:1371: ok
./calc.at:1374: cat stderr
input:
| (1 + # + 1) = 1111
./calc.at:1374: $PREPARSER ./calc input
stderr:
syntax error: invalid character: '#'
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error: invalid character: '#'
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1374: cat stderr
input:
| (1 + 1) / (1 - 1)
./calc.at:1374: $PREPARSER ./calc input
stderr:
error: null divisor
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
error: null divisor
500. calc.at:1387: testing Calculator %glr-parser ...
./calc.at:1387: mv calc.y.tmp calc.y
./calc.at:1387: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
stdout:
./calc.at:1375: $EGREP '(malloc|free) *\(' calc.[ch] | $EGREP -v 'INFRINGES ON USER NAME SPACE'
./calc.at:1374: cat stderr
./calc.at:1375: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines.
(eof || $. == 1) && /^\s*$/
# No trailing space.
|| /\s$/
# No tabs.
|| /\t/
)' calc.c
input:
| 123
input:
./calc.at:1374: $PREPARSER ./calc --num input
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1375: $PREPARSER ./calc input
stderr:
stderr:
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1375: $EGREP -c -v 'Return for a new token:|LAC:' stderr
stderr:
./calc.at:1374: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
input:
| 1 + 2 * 3
./calc.at:1374: $PREPARSER ./calc --num input
| 1 2
./calc.at:1375: $PREPARSER ./calc input
stderr:
syntax error
stderr:
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
1.3: syntax error, unexpected number
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
stderr:
./calc.at:1387: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS
1.3: syntax error, unexpected number
syntax error
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
./calc.at:1374: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
| 1//2
./calc.at:1374: cat stderr
./calc.at:1375: $PREPARSER ./calc input
stderr:
1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!'
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
| 1 + 2 * 3
stderr:
1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!'
./calc.at:1374: $PREPARSER ./calc --exp input
stderr:
./calc.at:1374: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
./calc.at:1374: $EGREP -c -v 'Return for a new token:|LAC:' stderr
./calc.at:1375: cat stderr
input:
| error
./calc.at:1375: $PREPARSER ./calc input
stderr:
1.1: syntax error, unexpected invalid token
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
498. calc.at:1374: ok
stderr:
1.1: syntax error, unexpected invalid token
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
| 1 = 2 = 3
./calc.at:1375: $PREPARSER ./calc input
stderr:
1.7: syntax error, unexpected '='
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.7: syntax error, unexpected '='
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
|
| +1
./calc.at:1375: $PREPARSER ./calc input
stderr:
2.1: syntax error, unexpected '+'
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
501. calc.at:1389: testing Calculator %glr-parser %header ...
stderr:
2.1: syntax error, unexpected '+'
./calc.at:1389: mv calc.y.tmp calc.y
./calc.at:1389: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
./calc.at:1375: $PREPARSER ./calc /dev/null
stderr:
1.1: syntax error, unexpected end of file
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.1: syntax error, unexpected end of file
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1375: $PREPARSER ./calc input
stderr:
1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.1-46: error: 4444 != 1
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.1-46: error: 4444 != 1
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
| (!!) + (1 2) = 1
./calc.at:1375: $PREPARSER ./calc input
stderr:
1.11: syntax error, unexpected number
1.1-16: error: 2222 != 1
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1389: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS
1.11: syntax error, unexpected number
1.1-16: error: 2222 != 1
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
| (- *) + (1 2) = 1
./calc.at:1375: $PREPARSER ./calc input
stderr:
1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.12: syntax error, unexpected number
1.1-17: error: 2222 != 1
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.12: syntax error, unexpected number
1.1-17: error: 2222 != 1
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
| (* *) + (*) + (*)
./calc.at:1375: $PREPARSER ./calc input
stderr:
1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
| 1 + 2 * 3 + !+ ++
./calc.at:1375: $PREPARSER ./calc input
stderr:
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1375: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
| 1 + 2 * 3 + !- ++
./calc.at:1375: $PREPARSER ./calc input
stderr:
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
| 1 + 2 * 3 + !* ++
./calc.at:1375: $PREPARSER ./calc input
stderr:
1.14: memory exhausted
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.14: memory exhausted
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
| (#) + (#) = 2222
./calc.at:1375: $PREPARSER ./calc input
stderr:
1.2: syntax error: invalid character: '#'
1.8: syntax error: invalid character: '#'
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error: invalid character: '#'
1.8: syntax error: invalid character: '#'
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
| (1 + #) = 1111
./calc.at:1375: $PREPARSER ./calc input
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
| (# + 1) = 1111
./calc.at:1375: $PREPARSER ./calc input
stderr:
1.2: syntax error: invalid character: '#'
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error: invalid character: '#'
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
| (1 + # + 1) = 1111
./calc.at:1375: $PREPARSER ./calc input
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
| (1 + 1) / (1 - 1)
./calc.at:1375: $PREPARSER ./calc input
stderr:
1.11-17: error: null divisor
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.11-17: error: null divisor
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
| 123
./calc.at:1375: $PREPARSER ./calc --num input
stderr:
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1375: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
| 1 + 2 * 3
./calc.at:1375: $PREPARSER ./calc --num input
stderr:
1.3: syntax error, unexpected '+', expecting end of file
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.3: syntax error, unexpected '+', expecting end of file
./calc.at:1375: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1375: cat stderr
input:
| 1 + 2 * 3
./calc.at:1375: $PREPARSER ./calc --exp input
stderr:
./calc.at:1375: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1375: $EGREP -c -v 'Return for a new token:|LAC:' stderr
499. calc.at:1375: ok
502. calc.at:1390: testing Calculator %glr-parser %locations ...
./calc.at:1390: mv calc.y.tmp calc.y
./calc.at:1390: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y
./calc.at:1390: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS
stderr:
stdout:
./calc.at:1387: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines.
(eof || $. == 1) && /^\s*$/
# No trailing space.
|| /\s$/
# No tabs.
|| /\t/
)' calc.c
input:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1387: $PREPARSER ./calc input
stderr:
./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
input:
| 1 2
./calc.at:1387: $PREPARSER ./calc input
stderr:
syntax error
./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error
./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1387: cat stderr
input:
| 1//2
./calc.at:1387: $PREPARSER ./calc input
stderr:
syntax error
./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error
./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1387: cat stderr
input:
| error
./calc.at:1387: $PREPARSER ./calc input
stderr:
syntax error
./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error
./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1387: cat stderr
input:
| 1 = 2 = 3
./calc.at:1387: $PREPARSER ./calc input
stderr:
syntax error
./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error
./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1387: cat stderr
input:
|
| +1
./calc.at:1387: $PREPARSER ./calc input
stderr:
syntax error
./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error
./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1387: cat stderr
./calc.at:1387: $PREPARSER ./calc /dev/null
stderr:
syntax error
./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error
./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
stdout:
./calc.at:1389: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines.
(eof || $. == 1) && /^\s*$/
# No trailing space.
|| /\s$/
# No tabs.
|| /\t/ )' calc.c calc.h ./calc.at:1387: cat stderr input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 input: ./calc.at:1389: $PREPARSER ./calc input | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1387: $PREPARSER ./calc input stderr: stderr: ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 input: | 1 2 ./calc.at:1389: $PREPARSER ./calc input stderr: syntax error ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1387: cat stderr input: | (!!) 
+ (1 2) = 1 ./calc.at:1387: $PREPARSER ./calc input ./calc.at:1389: cat stderr stderr: syntax error error: 2222 != 1 ./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1//2 stderr: ./calc.at:1389: $PREPARSER ./calc input syntax error error: 2222 != 1 stderr: syntax error ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1387: cat stderr ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (- *) + (1 2) = 1 ./calc.at:1387: $PREPARSER ./calc input stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1389: cat stderr stderr: syntax error syntax error error: 2222 != 1 input: | error ./calc.at:1389: $PREPARSER ./calc input stderr: syntax error ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1387: cat stderr ./calc.at:1389: cat stderr input: | (* *) + (*) + (*) ./calc.at:1387: $PREPARSER ./calc input stderr: input: syntax error syntax error syntax error ./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | 1 = 2 = 3 ./calc.at:1389: $PREPARSER ./calc input stderr: stderr: syntax error syntax error syntax error syntax error ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1387: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1389: cat stderr ./calc.at:1387: $PREPARSER ./calc input stderr: ./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | | +1 ./calc.at:1389: $PREPARSER ./calc input stderr: syntax error ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1387: $PREPARSER ./calc input syntax error stderr: ./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1389: cat stderr ./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1389: $PREPARSER ./calc /dev/null stderr: syntax error ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1387: cat stderr stderr: syntax error input: | 1 + 2 * 3 + !* ++ ./calc.at:1387: $PREPARSER ./calc input stderr: memory exhausted ./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: memory exhausted ./calc.at:1389: cat stderr input: ./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1389: $PREPARSER ./calc input stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1387: cat stderr input: | (#) + (#) = 2222 ./calc.at:1387: $PREPARSER ./calc input ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1389: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1389: $PREPARSER ./calc input stderr: syntax error error: 2222 != 1 ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error error: 2222 != 1 ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1387: cat stderr input: ./calc.at:1389: cat stderr | (1 + #) = 1111 ./calc.at:1387: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (- *) + (1 2) = 1 stderr: syntax error: invalid character: '#' ./calc.at:1389: $PREPARSER ./calc input stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1387: cat stderr ./calc.at:1389: cat stderr input: | (# + 1) = 1111 ./calc.at:1387: $PREPARSER ./calc input stderr: input: syntax error: invalid character: '#' | (* *) + (*) + (*) ./calc.at:1389: $PREPARSER ./calc input ./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' stderr: syntax error syntax error syntax error stderr: stdout: ./calc.at:1390: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. 
|| /\s$/ # No tabs. || /\t/ )' calc.c ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1390: $PREPARSER ./calc input stderr: ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1389: cat stderr stderr: input: ./calc.at:1387: cat stderr | 1 2 input: ./calc.at:1390: $PREPARSER ./calc input | 1 + 2 * 3 + !+ ++ stderr: 1.3: syntax error ./calc.at:1389: $PREPARSER ./calc input ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: stderr: 1.3: syntax error | (1 + # + 1) = 1111 ./calc.at:1387: $PREPARSER ./calc input ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: syntax error: invalid character: '#' ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 + 2 * 3 + !- ++ ./calc.at:1389: $PREPARSER ./calc input stderr: ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1390: cat stderr ./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: input: | 1//2 ./calc.at:1390: $PREPARSER ./calc input ./calc.at:1387: cat stderr ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.3: syntax error ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | (1 + 1) / (1 - 1) 1.3: syntax error ./calc.at:1387: $PREPARSER ./calc input stderr: ./calc.at:1389: cat stderr error: null divisor ./calc.at:1387: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | 1 + 2 * 3 + !* ++ ./calc.at:1389: $PREPARSER ./calc input stderr: error: null divisor stderr: memory exhausted ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: memory exhausted ./calc.at:1390: cat stderr ./calc.at:1387: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | error ./calc.at:1390: $PREPARSER ./calc input stderr: 1.1: syntax error ./calc.at:1387: cat stderr ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1389: cat stderr 500. calc.at:1387: ok stderr: 1.1: syntax error input: | (#) + (#) = 2222 ./calc.at:1389: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1390: cat stderr input: | 1 = 2 = 3 ./calc.at:1390: $PREPARSER ./calc input ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.7: syntax error ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error ./calc.at:1389: cat stderr input: | (1 + #) = 1111 ./calc.at:1389: $PREPARSER ./calc input ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error: invalid character: '#' ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1390: cat stderr syntax error: invalid character: '#' input: 503. calc.at:1391: testing Calculator %glr-parser %locations api.location.type={Span} ... 
./calc.at:1391: mv calc.y.tmp calc.y | | +1 ./calc.at:1390: $PREPARSER ./calc input ./calc.at:1391: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y stderr: 2.1: syntax error ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 2.1: syntax error ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1389: cat stderr ./calc.at:1390: cat stderr input: | (# + 1) = 1111 ./calc.at:1389: $PREPARSER ./calc input ./calc.at:1390: $PREPARSER ./calc /dev/null stderr: stderr: 1.1: syntax error syntax error: invalid character: '#' ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error stderr: syntax error: invalid character: '#' ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1390: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1390: $PREPARSER ./calc input ./calc.at:1389: cat stderr stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (1 + # + 1) = 1111 stderr: ./calc.at:1389: $PREPARSER ./calc input 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 stderr: syntax error: invalid character: '#' ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error: invalid character: '#' ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1390: cat stderr input: | (!!) 
+ (1 2) = 1 ./calc.at:1390: $PREPARSER ./calc input ./calc.at:1389: cat stderr stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: ./calc.at:1391: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS | (1 + 1) / (1 - 1) 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1389: $PREPARSER ./calc input stderr: error: null divisor ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1389: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: error: null divisor ./calc.at:1390: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1390: $PREPARSER ./calc input ./calc.at:1389: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1389: cat stderr 501. calc.at:1389: ok ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1390: cat stderr input: | (* *) + (*) + (*) ./calc.at:1390: $PREPARSER ./calc input stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1390: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1390: $PREPARSER ./calc input stderr: ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 504. calc.at:1392: testing Calculator %glr-parser %name-prefix "calc" ... ./calc.at:1392: mv calc.y.tmp calc.y ./calc.at:1392: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y input: | 1 + 2 * 3 + !- ++ ./calc.at:1390: $PREPARSER ./calc input stderr: ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1390: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1390: $PREPARSER ./calc input stderr: 1.14: memory exhausted ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.14: memory exhausted ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1390: cat stderr input: | (#) + (#) = 2222 ./calc.at:1390: $PREPARSER ./calc input ./calc.at:1392: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1390: cat stderr input: | (1 + #) = 1111 ./calc.at:1390: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1390: cat stderr input: | (# + 1) = 1111 ./calc.at:1390: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1390: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1390: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1390: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1390: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1390: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1390: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1390: cat stderr 502. calc.at:1390: ok 505. calc.at:1393: testing Calculator %glr-parser api.prefix={calc} ... 
./calc.at:1393: mv calc.y.tmp calc.y ./calc.at:1393: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1393: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS stderr: stdout: ./calc.at:1392: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1392: $PREPARSER ./calc input stderr: ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 2 ./calc.at:1392: $PREPARSER ./calc input stderr: syntax error ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1392: cat stderr input: | 1//2 ./calc.at:1392: $PREPARSER ./calc input stderr: syntax error ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: stdout: ./calc.at:1391: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
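Tests 502 through 505 each regenerate `calc.c` from a `calc.y` carrying the directives named in the test title (`%glr-parser`, `%locations`, `%name-prefix "calc"`, `api.prefix={calc}`) and compile it with the `$CC` line shown above. The suite generates the real `calc.y`; purely as an illustrative sketch, a minimal hand-written grammar of the same shape for test 503 (`%glr-parser %locations`) might look like:

```yacc
%glr-parser
%locations
%token NUM
%left '+' '-'
%left '*' '/'
%%
input: %empty | input line;
line:  '\n' | exp '\n';
exp:   NUM
     | exp '+' exp
     | exp '-' exp
     | exp '*' exp
     | exp '/' exp
     | '(' exp ')';
```

It would be processed with the same invocation the log shows, `bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y`, with `COLUMNS` and `NO_TERM_HYPERLINKS` exported to keep bison's diagnostics byte-stable across terminals.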
|| /\t/ )' calc.c ./calc.at:1392: cat stderr input: input: | error ./calc.at:1392: $PREPARSER ./calc input | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1391: $PREPARSER ./calc input stderr: stderr: syntax error ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error stderr: input: | 1 2 ./calc.at:1391: $PREPARSER ./calc input stderr: ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 1.3: syntax error ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1392: cat stderr input: | 1 = 2 = 3 ./calc.at:1392: $PREPARSER ./calc input stderr: syntax error ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1391: cat stderr stderr: syntax error input: | 1//2 ./calc.at:1391: $PREPARSER ./calc input stderr: 1.3: syntax error ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 1.3: syntax error ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1392: cat stderr input: | | +1 ./calc.at:1392: $PREPARSER ./calc input ./calc.at:1391: cat stderr stderr: syntax error ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | error stderr: ./calc.at:1391: $PREPARSER ./calc input syntax error stderr: 1.1: syntax error ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1391: cat stderr ./calc.at:1392: cat stderr ./calc.at:1392: $PREPARSER ./calc /dev/null input: | 1 = 2 = 3 stderr: ./calc.at:1391: $PREPARSER ./calc input syntax error ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error stderr: ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr syntax error stderr: 1.7: syntax error ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1391: cat stderr input: ./calc.at:1392: cat stderr | | +1 ./calc.at:1391: $PREPARSER ./calc input stderr: 2.1: syntax error ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 stderr: ./calc.at:1392: $PREPARSER ./calc input 2.1: syntax error stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1391: cat stderr ./calc.at:1391: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1392: cat stderr input: | (!!) + (1 2) = 1 stderr: 1.1: syntax error ./calc.at:1392: $PREPARSER ./calc input stderr: syntax error error: 2222 != 1 ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error error: 2222 != 1 ./calc.at:1391: cat stderr ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1391: $PREPARSER ./calc input stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1392: cat stderr stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 input: | (- *) + (1 2) = 1 ./calc.at:1392: $PREPARSER ./calc input stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1391: cat stderr input: ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | (!!) + (1 2) = 1 ./calc.at:1391: $PREPARSER ./calc input stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1392: cat stderr stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 input: | (* *) + (*) + (*) ./calc.at:1392: $PREPARSER ./calc input stderr: ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error syntax error syntax error ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error ./calc.at:1391: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1391: $PREPARSER ./calc input stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1392: cat stderr input: ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | 1 + 2 * 3 + !+ ++ ./calc.at:1392: $PREPARSER ./calc input stderr: ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: ./calc.at:1391: cat stderr | 1 + 2 * 3 + !- ++ ./calc.at:1392: $PREPARSER ./calc input stderr: ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (* *) + (*) + (*) ./calc.at:1391: $PREPARSER ./calc input stderr: stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1392: cat stderr ./calc.at:1391: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1392: $PREPARSER ./calc input stderr: memory exhausted ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1391: $PREPARSER ./calc input stderr: stderr: ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr memory exhausted stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1391: $PREPARSER ./calc input stderr: ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: ./calc.at:1392: cat stderr ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (#) + (#) = 2222 ./calc.at:1392: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1391: cat stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' input: | 1 + 2 * 3 + !* ++ ./calc.at:1391: $PREPARSER ./calc input stderr: 1.14: memory exhausted ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.14: memory exhausted ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1392: cat stderr input: | (1 + #) = 1111 ./calc.at:1392: $PREPARSER ./calc input ./calc.at:1391: cat stderr stderr: syntax error: invalid character: '#' ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (#) + (#) = 2222 ./calc.at:1391: $PREPARSER ./calc input stderr: stderr: syntax error: invalid character: '#' stdout: ./calc.at:1393: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.c stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' input: ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1393: $PREPARSER ./calc input stderr: ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1392: cat stderr input: | 1 2 ./calc.at:1393: $PREPARSER ./calc input stderr: syntax error ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1391: cat stderr input: | (# + 1) = 1111 ./calc.at:1392: $PREPARSER ./calc input stderr: syntax error input: stderr: syntax error: invalid character: '#' ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | (1 + #) = 1111 ./calc.at:1391: $PREPARSER ./calc input stderr: stderr: 1.6: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my 
$unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1393: cat stderr ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: ./calc.at:1392: cat stderr | 1//2 ./calc.at:1393: $PREPARSER ./calc input stderr: syntax error ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1391: cat stderr input: stderr: | (1 + # + 1) = 1111 ./calc.at:1392: $PREPARSER ./calc input syntax error stderr: input: syntax error: invalid character: '#' ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | (# + 1) = 1111 ./calc.at:1391: $PREPARSER ./calc input stderr: stderr: syntax error: invalid character: '#' ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 1.2: syntax error: invalid character: '#' ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1393: cat stderr ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | error ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1393: $PREPARSER ./calc input stderr: syntax error ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1392: cat stderr stderr: syntax error input: ./calc.at:1391: cat stderr | (1 + 1) / (1 - 1) ./calc.at:1392: $PREPARSER ./calc input stderr: error: null divisor ./calc.at:1392: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (1 + # + 1) = 1111 ./calc.at:1391: $PREPARSER ./calc input stderr: error: null divisor stderr: ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 1.6: syntax error: invalid character: '#' ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1393: cat stderr ./calc.at:1392: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 = 2 = 3 ./calc.at:1393: $PREPARSER ./calc input stderr: syntax error ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1392: cat stderr stderr: syntax error ./calc.at:1391: cat stderr 504. calc.at:1392: ok ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (1 + 1) / (1 - 1) ./calc.at:1391: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1391: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1393: cat stderr stderr: 1.11-17: error: null divisor input: | | +1 ./calc.at:1393: $PREPARSER ./calc input stderr: syntax error ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1391: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error ./calc.at:1391: cat stderr 503. calc.at:1391: ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ok ./calc.at:1393: cat stderr ./calc.at:1393: $PREPARSER ./calc /dev/null stderr: syntax error ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error 506. calc.at:1394: testing Calculator %glr-parser %verbose ... ./calc.at:1394: mv calc.y.tmp calc.y ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1394: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1393: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1393: $PREPARSER ./calc input stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1393: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1393: $PREPARSER ./calc input stderr: syntax error error: 2222 != 1 ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error error: 2222 != 1 507. calc.at:1395: testing Calculator %glr-parser parse.error=verbose ... ./calc.at:1395: mv calc.y.tmp calc.y ./calc.at:1395: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1394: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS ./calc.at:1393: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1393: $PREPARSER ./calc input stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1393: cat stderr input: | (* *) + (*) + (*) ./calc.at:1393: $PREPARSER ./calc input stderr: syntax error syntax error syntax error ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1395: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS ./calc.at:1393: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1393: $PREPARSER ./calc input stderr: ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1393: $PREPARSER ./calc input stderr: ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1393: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1393: $PREPARSER ./calc input stderr: memory exhausted ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: memory exhausted ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1393: cat stderr input: | (#) + (#) = 2222 ./calc.at:1393: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1393: cat stderr input: | (1 + #) = 1111 ./calc.at:1393: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1393: cat stderr input: | (# + 1) = 1111 ./calc.at:1393: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1393: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1393: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1393: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1393: $PREPARSER ./calc input stderr: error: null divisor ./calc.at:1393: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: error: null divisor ./calc.at:1393: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1393: cat stderr 505. calc.at:1393: ok 508. calc.at:1397: testing Calculator %glr-parser api.pure %locations ... 
./calc.at:1397: mv calc.y.tmp calc.y ./calc.at:1397: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1397: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS stderr: stdout: ./calc.at:1394: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1394: $PREPARSER ./calc input stderr: ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 2 ./calc.at:1394: $PREPARSER ./calc input stderr: syntax error ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1394: cat stderr input: | 1//2 ./calc.at:1394: $PREPARSER ./calc input stderr: syntax error ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1394: cat stderr input: | error ./calc.at:1394: $PREPARSER ./calc input stderr: syntax error ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1394: cat stderr input: | 1 = 2 = 3 ./calc.at:1394: $PREPARSER ./calc input stderr: syntax error ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1394: cat stderr input: | | +1 ./calc.at:1394: $PREPARSER ./calc input stderr: syntax error ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1394: cat stderr ./calc.at:1394: $PREPARSER ./calc /dev/null stderr: syntax error ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1394: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1394: $PREPARSER ./calc input stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: stdout: ./calc.at:1394: cat stderr ./calc.at:1395: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c input: | (!!) 
+ (1 2) = 1 ./calc.at:1394: $PREPARSER ./calc input input: stderr: syntax error error: 2222 != 1 | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1395: $PREPARSER ./calc input stderr: stderr: ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr syntax error error: 2222 != 1 stderr: input: | 1 2 ./calc.at:1395: $PREPARSER ./calc input stderr: ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error, unexpected number ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected number ./calc.at:1394: cat stderr ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (- *) + (1 2) = 1 ./calc.at:1394: $PREPARSER ./calc input stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1395: cat stderr input: ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | 1//2 ./calc.at:1395: $PREPARSER ./calc input stderr: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1394: cat stderr input: | (* *) + (*) + (*) stderr: ./calc.at:1394: $PREPARSER ./calc input syntax error, unexpected '/', expecting number or '-' or '(' or '!' stderr: syntax error syntax error syntax error ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1395: cat stderr ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | error ./calc.at:1395: $PREPARSER ./calc input stderr: syntax error, unexpected invalid token ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected invalid token ./calc.at:1394: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1394: $PREPARSER ./calc input stderr: ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: input: ./calc.at:1395: cat stderr | 1 + 2 * 3 + !- ++ ./calc.at:1394: $PREPARSER ./calc input stderr: ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 = 2 = 3 ./calc.at:1395: $PREPARSER ./calc input stderr: stderr: syntax error, unexpected '=' ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '=' ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1394: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1394: $PREPARSER ./calc input ./calc.at:1395: cat stderr stderr: memory exhausted ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | | +1 ./calc.at:1395: $PREPARSER ./calc input memory exhausted stderr: syntax error, unexpected '+' ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '+' ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1394: cat stderr ./calc.at:1395: cat stderr input: ./calc.at:1395: $PREPARSER ./calc /dev/null | (#) + (#) = 2222 ./calc.at:1394: $PREPARSER ./calc input stderr: syntax error, unexpected end of input ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected end of input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1395: cat stderr ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1395: $PREPARSER ./calc input stderr: syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
syntax error, unexpected '*', expecting number or '-' or '(' or '!' error: 4444 != 1 ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' error: 4444 != 1 ./calc.at:1394: cat stderr input: | (1 + #) = 1111 ./calc.at:1394: $PREPARSER ./calc input stderr: ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error: invalid character: '#' ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1395: cat stderr stderr: syntax error: invalid character: '#' input: | (!!) + (1 2) = 1 ./calc.at:1395: $PREPARSER ./calc input stderr: syntax error, unexpected number error: 2222 != 1 ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error, unexpected number error: 2222 != 1 ./calc.at:1394: cat stderr input: | (# + 1) = 1111 ./calc.at:1394: $PREPARSER ./calc input stderr: ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error: invalid character: '#' ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1395: cat stderr stderr: syntax error: invalid character: '#' input: | (- *) + (1 2) = 1 ./calc.at:1395: $PREPARSER ./calc input stderr: syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected number error: 2222 != 1 ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected number error: 2222 != 1 ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1394: cat stderr ./calc.at:1395: cat stderr input: input: | (1 + # + 1) = 1111 ./calc.at:1394: $PREPARSER ./calc input | (* *) + (*) + (*) ./calc.at:1395: $PREPARSER ./calc input stderr: stderr: syntax error: invalid character: '#' ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: syntax error: invalid character: '#' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1395: cat stderr ./calc.at:1394: cat stderr input: | 1 + 2 * 3 + !+ ++ input: ./calc.at:1395: $PREPARSER ./calc input | (1 + 1) / (1 - 1) ./calc.at:1394: $PREPARSER ./calc input stderr: stderr: ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr error: null divisor ./calc.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: error: null divisor stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1395: $PREPARSER ./calc input stderr: ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1394: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: ./calc.at:1394: cat stderr ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 506. calc.at:1394: ok ./calc.at:1395: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1395: $PREPARSER ./calc input stderr: memory exhausted ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: memory exhausted stdout: ./calc.at:1397: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c input: ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1397: $PREPARSER ./calc input stderr: ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1395: cat stderr stderr: input: | 1 2 input: ./calc.at:1397: $PREPARSER ./calc input stderr: 1.3: syntax error ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | (#) + (#) = 2222 ./calc.at:1395: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' stderr: 1.3: syntax error ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 509. calc.at:1398: testing Calculator %glr-parser parse.error=verbose %locations ... ./calc.at:1398: mv calc.y.tmp calc.y ./calc.at:1398: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr ./calc.at:1395: cat stderr input: | 1//2 ./calc.at:1397: $PREPARSER ./calc input stderr: input: 1.3: syntax error ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | (1 + #) = 1111 ./calc.at:1395: $PREPARSER ./calc input stderr: stderr: 1.3: syntax error syntax error: invalid character: '#' ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | error ./calc.at:1395: cat stderr ./calc.at:1397: $PREPARSER ./calc input stderr: 1.1: syntax error ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (# + 1) = 1111 ./calc.at:1395: $PREPARSER ./calc input stderr: 1.1: syntax error stderr: syntax error: invalid character: '#' ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr input: ./calc.at:1395: cat stderr | 1 = 2 = 3 ./calc.at:1397: $PREPARSER ./calc input stderr: 1.7: syntax error ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (1 + # + 1) = 1111 ./calc.at:1395: $PREPARSER ./calc input stderr: stderr: 1.7: syntax error syntax error: invalid character: '#' ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1398: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS stderr: ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error: invalid character: '#' ./calc.at:1397: cat stderr ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | | +1 ./calc.at:1397: $PREPARSER ./calc input ./calc.at:1395: cat stderr stderr: 2.1: syntax error ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (1 + 1) / (1 - 1) ./calc.at:1395: $PREPARSER ./calc input stderr: error: null divisor ./calc.at:1395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 2.1: syntax error error: null divisor ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1395: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr ./calc.at:1395: cat stderr ./calc.at:1397: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 507. calc.at:1395: ok stderr: 1.1: syntax error ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1397: $PREPARSER ./calc input stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 510. calc.at:1400: testing Calculator %glr-parser parse.error=custom %locations %header %name-prefix "calc" %verbose ... ./calc.at:1400: mv calc.y.tmp calc.y ./calc.at:1397: cat stderr ./calc.at:1400: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y input: | (!!) + (1 2) = 1 ./calc.at:1397: $PREPARSER ./calc input stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1397: $PREPARSER ./calc input stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr input: | (* *) + (*) + (*) ./calc.at:1397: $PREPARSER ./calc input stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr ./calc.at:1400: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS input: | 1 + 2 * 3 + !+ ++ ./calc.at:1397: $PREPARSER ./calc input stderr: ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1397: $PREPARSER ./calc input stderr: ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1397: $PREPARSER ./calc input stderr: 1.14: memory exhausted ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.14: memory exhausted ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr input: | (#) + (#) = 2222 ./calc.at:1397: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr input: | (1 + #) = 1111 ./calc.at:1397: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr input: | (# + 1) = 1111 ./calc.at:1397: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1397: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1397: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1397: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1397: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1397: cat stderr 508. calc.at:1397: ok 511. 
calc.at:1401: testing Calculator %glr-parser parse.error=custom %locations %header %name-prefix "calc" %verbose api.pure ... ./calc.at:1401: mv calc.y.tmp calc.y ./calc.at:1401: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1401: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS stderr: stdout: ./calc.at:1398: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1398: $PREPARSER ./calc input stderr: ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 2 ./calc.at:1398: $PREPARSER ./calc input stderr: 1.3: syntax error, unexpected number ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error, unexpected number ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1398: cat stderr input: | 1//2 ./calc.at:1398: $PREPARSER ./calc input stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1398: cat stderr input: | error ./calc.at:1398: $PREPARSER ./calc input stderr: 1.1: syntax error, unexpected invalid token ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error, unexpected invalid token ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1398: cat stderr input: | 1 = 2 = 3 ./calc.at:1398: $PREPARSER ./calc input stderr: 1.7: syntax error, unexpected '=' ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error, unexpected '=' ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1398: cat stderr input: | | +1 ./calc.at:1398: $PREPARSER ./calc input stderr: 2.1: syntax error, unexpected '+' ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error, unexpected '+' ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1398: cat stderr ./calc.at:1398: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error, unexpected end of input ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error, unexpected end of input ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1398: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1398: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1398: cat stderr input: | (!!) 
+ (1 2) = 1 ./calc.at:1398: $PREPARSER ./calc input stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1398: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1398: $PREPARSER ./calc input stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1398: cat stderr input: | (* *) + (*) + (*) ./calc.at:1398: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
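After every `$PREPARSER ./calc` run above, the harness pipes the captured stderr through the same `sed` filter before comparing it against the expected output. A minimal standalone sketch of that step (the sample input lines are illustrative; the `sed` expression is the one quoted verbatim throughout this log):

```shell
# Strip gcov profiling noise from a parser's stderr before diffing,
# as the test harness does after each $PREPARSER invocation.
printf '%s\n' \
  "profiling:calc.gcda:Merge mismatch for summaries" \
  "1.11: syntax error, unexpected number" \
  | sed -e '/^profiling:.*:Merge mismatch for summaries/d'
```

Only the syntax-error line survives the filter; the profiling line is dropped so that coverage-instrumented builds diff cleanly.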
./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1398: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1398: $PREPARSER ./calc input stderr: ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1398: $PREPARSER ./calc input stderr: ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1398: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1398: $PREPARSER ./calc input stderr: 1.14: memory exhausted ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.14: memory exhausted ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1398: cat stderr input: | (#) + (#) = 2222 ./calc.at:1398: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1398: cat stderr input: | (1 + #) = 1111 ./calc.at:1398: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1398: cat stderr input: | (# + 1) = 1111 ./calc.at:1398: $PREPARSER ./calc input stderr: stderr: stdout: 1.2: syntax error: invalid character: '#' ./calc.at:1400: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.c calc.h ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1400: $PREPARSER ./calc input ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1398: cat stderr stderr: input: input: | 1 2 ./calc.at:1400: $PREPARSER ./calc input | (1 + # + 1) = 1111 ./calc.at:1398: $PREPARSER ./calc input stderr: stderr: 1.6: syntax error: invalid character: '#' 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' stderr: 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1400: cat stderr ./calc.at:1398: cat stderr input: | 1//2 input: ./calc.at:1400: $PREPARSER ./calc input | (1 + 1) / (1 - 1) ./calc.at:1398: $PREPARSER ./calc input stderr: 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1398: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 1.11-17: error: null divisor 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1400: cat stderr ./calc.at:1398: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | error ./calc.at:1400: $PREPARSER ./calc input ./calc.at:1398: cat stderr stderr: 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) 509. 
calc.at:1398: ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ok stderr: 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1400: cat stderr input: | 1 = 2 = 3 ./calc.at:1400: $PREPARSER ./calc input stderr: 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) ./calc.at:1400: cat stderr input: | | +1 ./calc.at:1400: $PREPARSER ./calc input stderr: 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1400: cat stderr ./calc.at:1400: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) 512. calc.at:1402: testing Calculator %glr-parser parse.error=detailed %locations %header %name-prefix "calc" %verbose ... 
./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1402: mv calc.y.tmp calc.y stderr: 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1400: cat stderr ./calc.at:1402: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1400: $PREPARSER ./calc input stderr: 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 ./calc.at:1400: cat stderr input: | (!!) 
+ (1 2) = 1 ./calc.at:1400: $PREPARSER ./calc input stderr: 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 ./calc.at:1400: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1400: $PREPARSER ./calc input stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 ./calc.at:1400: cat stderr input: | (* *) + (*) + (*) ./calc.at:1400: $PREPARSER ./calc input ./calc.at:1402: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS stderr: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1400: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1400: $PREPARSER ./calc input stderr: ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1400: $PREPARSER ./calc input stderr: ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for 
summaries/d' stderr stderr: ./calc.at:1400: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1400: $PREPARSER ./calc input stderr: stderr: 1.14: memory exhausted stdout: ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1401: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c calc.h stderr: 1.14: memory exhausted input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1401: $PREPARSER ./calc input ./calc.at:1400: cat stderr stderr: ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (#) + (#) = 2222 ./calc.at:1400: $PREPARSER ./calc input stderr: stderr: input: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' | 1 2 ./calc.at:1401: $PREPARSER ./calc input ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' stderr: 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1400: cat stderr input: | (1 + #) = 1111 ./calc.at:1400: $PREPARSER ./calc input ./calc.at:1401: cat stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1//2 ./calc.at:1401: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' stderr: 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error 
on token ['/'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1400: cat stderr input: ./calc.at:1401: cat stderr | (# + 1) = 1111 ./calc.at:1400: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' input: ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | error ./calc.at:1401: $PREPARSER ./calc input stderr: stderr: 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) 1.2: syntax error: invalid character: '#' ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1400: cat stderr ./calc.at:1401: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1400: $PREPARSER ./calc input input: | 1 = 2 = 3 ./calc.at:1401: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 1.6: syntax error: invalid character: '#' 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) ./calc.at:1400: cat stderr ./calc.at:1401: cat stderr input: input: | | +1 ./calc.at:1401: $PREPARSER ./calc input | (1 + 1) / (1 - 1) ./calc.at:1400: $PREPARSER ./calc input stderr: stderr: 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) 1.11-17: error: null divisor ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1400: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) stderr: 1.11-17: error: null divisor ./calc.at:1401: cat stderr ./calc.at:1400: cat stderr ./calc.at:1401: $PREPARSER ./calc 
/dev/null stderr: 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) 510. calc.at:1400: ok ./calc.at:1401: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1401: $PREPARSER ./calc input stderr: 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 ./calc.at:1401: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1401: $PREPARSER ./calc input stderr: 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 513. calc.at:1403: testing Calculator %glr-parser parse.error=verbose %locations %header %name-prefix "calc" %verbose ... 
./calc.at:1403: mv calc.y.tmp calc.y ./calc.at:1403: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1401: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1401: $PREPARSER ./calc input stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 ./calc.at:1401: cat stderr input: | (* *) + (*) + (*) ./calc.at:1401: $PREPARSER ./calc input stderr: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1401: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1401: $PREPARSER ./calc input stderr: ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1401: $PREPARSER ./calc input stderr: ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1401: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1401: $PREPARSER ./calc input stderr: 1.14: memory exhausted ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.14: memory exhausted ./calc.at:1401: 
cat stderr ./calc.at:1403: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS input: | (#) + (#) = 2222 ./calc.at:1401: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1401: cat stderr input: | (1 + #) = 1111 ./calc.at:1401: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1401: cat stderr input: | (# + 1) = 1111 ./calc.at:1401: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1401: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1401: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1401: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1401: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1401: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1401: cat stderr 511. calc.at:1401: ok 514. calc.at:1405: testing Calculator %glr-parser parse.error=custom %locations %header %name-prefix "calc" %verbose ... 
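Before exercising each freshly generated `calc.c`/`calc.h`, the harness runs the `"$PERL" -ne 'chomp; print ... if ...'` hygiene check seen above, which rejects a blank first or last line, trailing whitespace, and tabs. A sketch of the same check in Python (the function name `whitespace_offences` is illustrative; Perl's `eof` test is approximated by the last list element):

```python
def whitespace_offences(name, lines):
    """Flag lines the generated-file hygiene check would print:
    blank first/last lines, trailing whitespace, or tabs."""
    bad = []
    last = len(lines)
    for i, raw in enumerate(lines, 1):
        line = raw.rstrip("\n")
        blank_edge = (i == 1 or i == last) and line.strip() == ""
        if blank_edge or line != line.rstrip() or "\t" in line:
            # Same "$ARGV:$.: {$_}" report format as the Perl check.
            bad.append("%s:%d: {%s}" % (name, i, line))
    return bad
```

An empty report (nothing printed) is what lets the test proceed; any output here marks the generated file as malformed.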
./calc.at:1405: mv calc.y.tmp calc.y ./calc.at:1405: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1405: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS stderr: stdout: ./calc.at:1402: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c calc.h input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1402: $PREPARSER ./calc input stderr: ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 2 ./calc.at:1402: $PREPARSER ./calc input stderr: 1.3: syntax error, unexpected number ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error, unexpected number ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1402: cat stderr input: | 1//2 ./calc.at:1402: $PREPARSER ./calc input stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1402: cat stderr input: | error ./calc.at:1402: $PREPARSER ./calc input stderr: 1.1: syntax error, unexpected invalid token ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error, unexpected invalid token ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1402: cat stderr input: | 1 = 2 = 3 ./calc.at:1402: $PREPARSER ./calc input stderr: 1.7: syntax error, unexpected '=' ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error, unexpected '=' ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1402: cat stderr input: | | +1 ./calc.at:1402: $PREPARSER ./calc input stderr: 2.1: syntax error, unexpected '+' ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error, unexpected '+' ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1402: cat stderr ./calc.at:1402: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error, unexpected end of file ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error, unexpected end of file ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1402: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1402: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1402: cat stderr input: | (!!) 
+ (1 2) = 1 ./calc.at:1402: $PREPARSER ./calc input stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1402: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1402: $PREPARSER ./calc input stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1402: cat stderr input: | (* *) + (*) + (*) ./calc.at:1402: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1402: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1402: $PREPARSER ./calc input stderr: ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1402: $PREPARSER ./calc input stderr: ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1402: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1402: $PREPARSER ./calc input stderr: 1.14: memory exhausted ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.14: memory exhausted ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1402: cat stderr input: | (#) + (#) = 2222 ./calc.at:1402: $PREPARSER ./calc input stderr: stderr: stdout: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1403: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.c calc.h stderr: input: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1403: $PREPARSER ./calc input stderr: ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 2 ./calc.at:1403: $PREPARSER ./calc input ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.3: syntax error, unexpected number ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1402: cat stderr stderr: 1.3: syntax error, unexpected number input: | (1 + #) = 1111 ./calc.at:1402: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1403: cat stderr ./calc.at:1402: cat stderr input: input: | 1//2 ./calc.at:1403: $PREPARSER ./calc input | (# + 1) = 1111 ./calc.at:1402: $PREPARSER ./calc input stderr: stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 1.2: syntax error: invalid character: '#' ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' stderr: ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 1.2: syntax error: invalid character: '#' ./calc.at:1403: cat stderr input: ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | error ./calc.at:1403: $PREPARSER ./calc input stderr: 1.1: syntax error, unexpected invalid token ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1402: cat stderr stderr: 1.1: syntax error, unexpected invalid token input: | (1 + # + 1) = 1111 ./calc.at:1402: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1403: cat stderr input: | 1 = 2 = 3 ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1403: $PREPARSER ./calc input stderr: 1.7: syntax error, unexpected '=' ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1402: cat stderr stderr: 1.7: syntax error, unexpected '=' input: | (1 + 1) / (1 - 1) ./calc.at:1402: $PREPARSER ./calc input ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.11-17: error: null divisor ./calc.at:1402: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1403: cat stderr ./calc.at:1402: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | | +1 ./calc.at:1403: $PREPARSER ./calc input stderr: ./calc.at:1402: cat stderr 2.1: syntax error, unexpected '+' ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 512. calc.at:1402: ok stderr: 2.1: syntax error, unexpected '+' ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1403: cat stderr ./calc.at:1403: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error, unexpected end of input ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error, unexpected end of input ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1403: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1403: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 515. calc.at:1407: testing Calculator %glr-parser %debug ... ./calc.at:1407: mv calc.y.tmp calc.y ./calc.at:1407: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1403: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1403: $PREPARSER ./calc input stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1403: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1403: $PREPARSER ./calc input stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1403: cat stderr input: | (* *) + (*) + (*) ./calc.at:1403: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1403: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1403: $PREPARSER ./calc input stderr: ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1407: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c $LIBS input: | 1 + 2 * 3 + !- ++ ./calc.at:1403: $PREPARSER ./calc input stderr: ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1403: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1403: $PREPARSER ./calc input stderr: 1.14: memory exhausted ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.14: memory exhausted ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1403: cat stderr input: | (#) + (#) = 2222 ./calc.at:1403: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' stdout: ./calc.at:1405: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.c calc.h ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1405: $PREPARSER ./calc input ./calc.at:1403: cat stderr stderr: ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | (1 + #) = 1111 ./calc.at:1403: $PREPARSER ./calc input input: | 1 2 ./calc.at:1405: $PREPARSER ./calc input stderr: stderr: 1.6: syntax error: invalid character: '#' 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1403: cat stderr ./calc.at:1405: cat stderr input: | (# + 1) = 1111 ./calc.at:1403: $PREPARSER ./calc input input: stderr: | 1//2 ./calc.at:1405: $PREPARSER ./calc input 1.2: syntax error: invalid character: '#' stderr: 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1405: cat stderr input: ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | error ./calc.at:1405: $PREPARSER ./calc input ./calc.at:1403: cat stderr stderr: 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | (1 + # + 1) = 1111 ./calc.at:1403: $PREPARSER ./calc input stderr: 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) 1.6: syntax error: invalid character: '#' ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1405: cat stderr ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 = 2 = 3 ./calc.at:1405: $PREPARSER ./calc input ./calc.at:1403: cat stderr stderr: 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | (1 + 1) / (1 - 1) ./calc.at:1403: $PREPARSER ./calc input 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) stderr: 1.11-17: error: null divisor ./calc.at:1403: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1405: cat stderr ./calc.at:1403: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | | +1 ./calc.at:1405: $PREPARSER ./calc input ./calc.at:1403: cat stderr stderr: 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 513. 
calc.at:1403: ok stderr: 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1405: cat stderr ./calc.at:1405: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1405: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1405: $PREPARSER ./calc input stderr: 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 ./calc.at:1405: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1405: $PREPARSER ./calc input stderr: 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 516. calc.at:1408: testing Calculator %glr-parser parse.error=verbose %debug %locations %header %name-prefix "calc" %verbose ... 
./calc.at:1408: mv calc.y.tmp calc.y 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 ./calc.at:1408: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1405: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1405: $PREPARSER ./calc input stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 ./calc.at:1405: cat stderr input: | (* *) + (*) + (*) ./calc.at:1405: $PREPARSER ./calc input stderr: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1405: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1405: $PREPARSER ./calc input stderr: ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1405: $PREPARSER ./calc input stderr: ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1408: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS ./calc.at:1405: cat stderr input: | 1 + 2 * 3 
+ !* ++ ./calc.at:1405: $PREPARSER ./calc input stderr: 1.14: memory exhausted ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.14: memory exhausted ./calc.at:1405: cat stderr input: | (#) + (#) = 2222 ./calc.at:1405: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1405: cat stderr input: | (1 + #) = 1111 ./calc.at:1405: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1405: cat stderr input: | (# + 1) = 1111 ./calc.at:1405: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1405: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1405: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1405: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1405: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1405: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1405: cat stderr 514. calc.at:1405: ok 517. calc.at:1409: testing Calculator %glr-parser parse.error=verbose %debug %locations %header api.prefix={calc} api.token.prefix={TOK_} %verbose ... 
./calc.at:1409: mv calc.y.tmp calc.y ./calc.at:1409: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1409: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS stderr: stdout: ./calc.at:1407: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1407: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Next token is token '=' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) 
Entering state 19 Reading a token Next token is token "number" (1.1: 7) Shifting token "number" (1.1: 7) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 7) -> $$ = nterm exp (1.1: 7) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 7) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 7) -> $$ = nterm exp (1.1: 7) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 7) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Reducing stack 0 by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 10 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -3) Entering state 31 Next token is token '=' (1.1: ) Reducing stack 0 by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 
-3) -> $$ = nterm exp (1.1: -6) Entering state 30 Next token is token '=' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: -6) -> $$ = nterm exp (1.1: -5) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 5) Shifting token "number" (1.1: 5) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 5) -> $$ = nterm exp (1.1: 5) Entering state 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 5) -> $$ = nterm exp (1.1: -5) Entering state 28 Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: -5) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -5) -> $$ = nterm exp (1.1: -5) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: -5) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Reducing stack 0 by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Reading a token Next 
token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 12 (line 117): $1 = nterm exp (1.1: 1) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 1) Entering state 10 Next token is token '=' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 28 Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: -1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: -1) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: -1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 1) 
Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Reading a token Next token is token ')' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Reducing stack 0 by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: -1) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: -1) Entering state 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 12 (line 117): $1 = nterm exp (1.1: -1) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 1) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' 
(1.1: ) Entering state 3 Reducing stack 0 by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 10 Next token is token '=' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: 1) Entering state 10 Next token is token '=' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 28 Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: -1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp 
(1.1: -1) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: -1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Reducing stack 0 by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 29 Reading a token Next token is token '-' (1.1: ) Reducing stack 0 by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: -1) Entering state 8 Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 29 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 8 (line 99): $1 = nterm exp (1.1: -1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -4) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 
19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 4) Shifting token "number" (1.1: 4) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 4) -> $$ = nterm exp (1.1: 4) Entering state 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 4) -> $$ = nterm exp (1.1: -4) Entering state 28 Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: -4) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -4) -> $$ = nterm exp (1.1: -4) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: -4) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 29 Reading a token Next token is token ')' (1.1: ) 
Reducing stack 0 by rule 8 (line 99): $1 = nterm exp (1.1: 2) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -1) Entering state 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Reducing stack 0 by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: -1) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: -1) Entering state 29 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: 2) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 2) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 2) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Reducing stack 0 by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 8 Reading a token Next 
token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 33 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 8) Entering state 33 Next token is token '=' (1.1: ) Reducing stack 0 by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 8) -> $$ = nterm exp (1.1: 256) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 256) Shifting token "number" (1.1: 256) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 256) -> $$ = nterm exp (1.1: 256) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 256) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 256) -> $$ = nterm exp (1.1: 256) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 256) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 2) Shifting 
token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Reading a token Next token is token ')' (1.1: ) Reducing stack 0 by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 4) Entering state 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Reducing stack 0 by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 4) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 4) Entering state 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 33 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 12 (line 117): $1 = nterm exp (1.1: 4) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 64) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 64) Shifting token "number" (1.1: 64) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 64) -> $$ = nterm exp (1.1: 64) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 64) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 64) -> $$ = nterm exp (1.1: 64) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing 
stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 64) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 31 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 6) Entering state 30 Next token is token '=' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 6) -> $$ = nterm exp (1.1: 7) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 7) Shifting token "number" (1.1: 7) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = 
token "number" (1.1: 7) -> $$ = nterm exp (1.1: 7) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 7) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 7) -> $$ = nterm exp (1.1: 7) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 7) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Reducing stack 0 by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 30 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 10 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -3) Entering state 31 Next token is token '=' (1.1: ) Reducing stack 0 by rule 9 (line 100): $1 = nterm exp (1.1: 2) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: -3) -> $$ = nterm exp (1.1: -6) Entering state 30 Next token is token '=' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) 
$3 = nterm exp (1.1: -6) -> $$ = nterm exp (1.1: -5) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 5) Shifting token "number" (1.1: 5) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 5) -> $$ = nterm exp (1.1: 5) Entering state 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 5) -> $$ = nterm exp (1.1: -5) Entering state 28 Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: -5) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -5) -> $$ = nterm exp (1.1: -5) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: -5) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Reducing stack 0 by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering 
state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 12 (line 117): $1 = nterm exp (1.1: 1) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 1) Entering state 10 Next token is token '=' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 28 Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: -1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: -1) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: -1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Reading a 
token Next token is token ')' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Reducing stack 0 by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: -1) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: -1) Entering state 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 12 (line 117): $1 = nterm exp (1.1: -1) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 1) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 1) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Reducing stack 0 by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = 
nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 10 Next token is token '=' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: 1) Entering state 10 Next token is token '=' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: -1) Entering state 28 Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: -1) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: -1) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: -1) $2 = token 
'\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Reducing stack 0 by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 29 Reading a token Next token is token '-' (1.1: ) Reducing stack 0 by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: -1) Entering state 8 Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 29 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 8 (line 99): $1 = nterm exp (1.1: -1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -4) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token "number" (1.1: 4) Shifting token "number" (1.1: 
4) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 4) -> $$ = nterm exp (1.1: 4) Entering state 10 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 11 (line 116): $1 = token '-' (1.1: ) $2 = nterm exp (1.1: 4) -> $$ = nterm exp (1.1: -4) Entering state 28 Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: -4) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: -4) -> $$ = nterm exp (1.1: -4) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: -4) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 29 Reading a token Next token is token ')' (1.1: ) Reducing stack 0 by rule 8 (line 99): $1 = nterm exp (1.1: 2) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: -1) Entering state 12 Next token is token ')' 
(1.1: ) Shifting token ')' (1.1: ) Entering state 27 Reducing stack 0 by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: -1) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: -1) Entering state 29 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: -1) -> $$ = nterm exp (1.1: 2) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 2) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 2) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Reducing stack 0 by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 
Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 33 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 8) Entering state 33 Next token is token '=' (1.1: ) Reducing stack 0 by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 8) -> $$ = nterm exp (1.1: 256) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 256) Shifting token "number" (1.1: 256) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 256) -> $$ = nterm exp (1.1: 256) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 256) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 256) -> $$ = nterm exp (1.1: 256) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 256) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Reading a token Next 
token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 33 Reading a token Next token is token ')' (1.1: ) Reducing stack 0 by rule 12 (line 117): $1 = nterm exp (1.1: 2) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 4) Entering state 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Reducing stack 0 by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 4) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 4) Entering state 8 Reading a token Next token is token '^' (1.1: ) Shifting token '^' (1.1: ) Entering state 24 Reading a token Next token is token "number" (1.1: 3) Shifting token "number" (1.1: 3) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 3) -> $$ = nterm exp (1.1: 3) Entering state 33 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 12 (line 117): $1 = nterm exp (1.1: 4) $2 = token '^' (1.1: ) $3 = nterm exp (1.1: 3) -> $$ = nterm exp (1.1: 64) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 64) Shifting token "number" (1.1: 64) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 64) -> $$ = nterm exp (1.1: 64) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 64) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 64) -> $$ = nterm exp (1.1: 64) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 64) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 18 Reducing stack 0 by rule 2 (line 72): $1 = nterm 
input (1.1: ) $2 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) input: | 1 2 ./calc.at:1407: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.1: 2) ./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.1: 2) ./calc.at:1407: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1407: cat stderr input: | 1//2 ./calc.at:1407: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.1: ) Shifting token '/' (1.1: ) Entering state 23 Reading a token Next token is token '/' (1.1: ) syntax error Error: popping token '/' (1.1: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.1: ) ./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.1: ) Shifting token '/' (1.1: ) Entering state 23 Reading a token Next token is token '/' (1.1: ) syntax error Error: popping token '/' (1.1: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.1: ) ./calc.at:1407: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1407: cat stderr input: | error ./calc.at:1407: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) syntax error Cleanup: discarding lookahead token "invalid token" (1.1: ) ./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) syntax error Cleanup: discarding lookahead token "invalid token" (1.1: ) ./calc.at:1407: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1407: cat stderr input: | 1 = 2 = 3 ./calc.at:1407: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 28 Reading a token Next token is token '=' (1.1: ) syntax error Error: popping nterm exp (1.1: 2) Error: popping token '=' (1.1: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.1: ) ./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token 
"number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 28 Reading a token Next token is token '=' (1.1: ) syntax error Error: popping nterm exp (1.1: 2) Error: popping token '=' (1.1: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.1: ) ./calc.at:1407: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1407: cat stderr input: | | +1 ./calc.at:1407: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Reducing stack 0 by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Reducing stack 0 by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token is token '+' (1.1: ) syntax error Error: popping nterm input (1.1: ) Cleanup: discarding lookahead token '+' (1.1: ) ./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 3 Reducing stack 0 by rule 3 (line 76): $1 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Reducing stack 0 by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Next token 
is token '+' (1.1: ) syntax error Error: popping nterm input (1.1: ) Cleanup: discarding lookahead token '+' (1.1: ) ./calc.at:1407: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1407: cat stderr ./calc.at:1407: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Reading a token Now at end of input. syntax error Cleanup: discarding lookahead token "end of input" (1.1: ) ./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Now at end of input. syntax error Cleanup: discarding lookahead token "end of input" (1.1: ) ./calc.at:1407: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1407: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1407: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Reading a token Next token is token '+' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 2) Entering state 12 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Reading a token Next token is token '+' (1.1: ) Reducing stack 
0 by rule 7 (line 98): $1 = nterm exp (1.1: 2) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 3) Entering state 12 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token ')' (1.1: ) syntax error Error: popping token '+' (1.1: ) Error: popping nterm exp (1.1: 3) Shifting token error (1.1: ) Entering state 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Reading a token Next token is token '+' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Reading a token Next token is token ')' (1.1: ) Entering state 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Reading a token Next token is token '+' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 2222) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 3333) Entering state 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token '(' (1.1: ) Shifting 
token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 31 Reading a token Next token is token '*' (1.1: ) Reducing stack 0 by rule 9 (line 100): $1 = nterm exp (1.1: 1) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Reading a token Next token is token '*' (1.1: ) syntax error Error: popping token '*' (1.1: ) Error: popping nterm exp (1.1: 2) Shifting token error (1.1: ) Entering state 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Reading a token Next token is token ')' (1.1: ) Entering state 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 3333) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 4444) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 
4444) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 4444 != 1 -> $$ = nterm exp (1.1: 4444) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 4444) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Reducing stack 0 by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Reading a token Next token is token '+' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token 
'+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 2) Entering state 12 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Reading a token Next token is token '+' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 2) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 3) Entering state 12 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token ')' (1.1: ) syntax error Error: popping token '+' (1.1: ) Error: popping nterm exp (1.1: 3) Shifting token error (1.1: ) Entering state 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Reading a token Next token is token '+' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Reading a token Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Reading a token Next token is token ')' (1.1: ) Entering state 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = 
token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Reading a token Next token is token '+' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 2222) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 3333) Entering state 8 Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Reading a token Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Reading a token Next token is token "number" (1.1: 2) Shifting token "number" (1.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 31 Reading a token Next token is token '*' (1.1: ) Reducing stack 0 by rule 9 (line 100): $1 = nterm exp (1.1: 1) $2 = token '*' (1.1: ) $3 = nterm exp (1.1: 2) -> $$ = nterm exp (1.1: 2) Entering state 12 Next token is token '*' (1.1: ) Shifting token '*' (1.1: ) Entering state 22 Reading a token Next token is token '*' (1.1: ) syntax error Error: popping token '*' (1.1: ) Error: popping nterm exp (1.1: 2) Shifting token error (1.1: ) Entering state 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Reading a token Next token is token ')' (1.1: ) Entering state 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 3333) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm 
exp (1.1: 4444) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 4444) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 4444 != 1 -> $$ = nterm exp (1.1: 4444) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 4444) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Reducing stack 0 by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1407: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1407: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1407: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 16 Reducing stack 0 by rule 16 (line 121): $1 = token '!' (1.1: ) $2 = token '!' 
(1.1: ) Shifting token error (1.1: ) Entering state 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Shifting token error (1.1: ) Entering state 11 Next token is token "number" (1.1: 2) Error: discarding token "number" (1.1: 2) Reading a token Next token is token ')' (1.1: ) Entering state 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 2222 != 1 -> $$ = nterm exp (1.1: 2222) Entering state 8 Next token is token '\n' (1.1: ) 
Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Reducing stack 0 by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 5 Reading a token Next token is token '!' (1.1: ) Shifting token '!' (1.1: ) Entering state 16 Reducing stack 0 by rule 16 (line 121): $1 = token '!' (1.1: ) $2 = token '!' (1.1: ) Shifting token error (1.1: ) Entering state 11 Reading a token Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Shifting token error (1.1: ) Entering state 11 Next token is token "number" (1.1: 2) Error: discarding token "number" (1.1: 2) Reading a token Next token is token ')' (1.1: ) Entering state 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) 
Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 2222 != 1 -> $$ = nterm exp (1.1: 2222) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Reducing stack 0 by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1407: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1407: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1407: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 2 Reading a token Next token is token '*' (1.1: ) syntax error Shifting token error (1.1: ) Entering state 9 Reducing stack 0 by rule 15 (line 120): $1 = token '-' (1.1: ) $2 = token error (1.1: ) Shifting token error (1.1: ) Entering state 11 Next token is token '*' (1.1: ) Error: discarding token '*' (1.1: ) Reading a token Next token is token ')' (1.1: ) Entering state 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Reading a token Next token is token "number" (1.1: 2) syntax error Error: popping nterm exp (1.1: 1) Shifting token error (1.1: ) Entering state 11 Next token is token "number" (1.1: 2) Error: discarding token "number" (1.1: 2) Reading a token Next token is token ')' (1.1: ) Entering state 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 30 
Reading a token Next token is token '=' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 1111) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 2222) Entering state 8 Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 2222) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1) error: 2222 != 1 -> $$ = nterm exp (1.1: 2222) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 2222) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Reducing stack 0 by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: )
Entering state 17
Cleanup: popping token "end of input" (1.1: )
Cleanup: popping nterm input (1.1: )
./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token '-' (1.1: )
Shifting token '-' (1.1: )
Entering state 2
Reading a token
Next token is token '*' (1.1: )
syntax error
Shifting token error (1.1: )
Entering state 9
Reducing stack 0 by rule 15 (line 120):
   $1 = token '-' (1.1: )
   $2 = token error (1.1: )
Shifting token error (1.1: )
Entering state 11
Next token is token '*' (1.1: )
Error: discarding token '*' (1.1: )
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 12
Reading a token
Next token is token "number" (1.1: 2)
syntax error
Error: popping nterm exp (1.1: 1)
Shifting token error (1.1: )
Entering state 11
Next token is token "number" (1.1: 2)
Error: discarding token "number" (1.1: 2)
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 30
Reading a token
Next token is token '=' (1.1: )
Reducing stack 0 by rule 7 (line 98):
   $1 = nterm exp (1.1: 1111)
   $2 = token '+' (1.1: )
   $3 = nterm exp (1.1: 1111)
-> $$ = nterm exp (1.1: 2222)
Entering state 8
Next token is token '=' (1.1: )
Shifting token '=' (1.1: )
Entering state 19
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 28
Reading a token
Next token is token '\n' (1.1: )
Reducing stack 0 by rule 6 (line 82):
   $1 = nterm exp (1.1: 2222)
   $2 = token '=' (1.1: )
   $3 = nterm exp (1.1: 1)
error: 2222 != 1
-> $$ = nterm exp (1.1: 2222)
Entering state 8
Next token is token '\n' (1.1: )
Shifting token '\n' (1.1: )
Entering state 25
Reducing stack 0 by rule 4 (line 77):
   $1 = nterm exp (1.1: 2222)
   $2 = token '\n' (1.1: )
-> $$ = nterm line (1.1: )
Entering state 7
Reducing stack 0 by rule 1 (line 71):
   $1 = nterm line (1.1: )
-> $$ = nterm input (1.1: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (1.1: )
Entering state 17
Cleanup: popping token "end of input" (1.1: )
Cleanup: popping nterm input (1.1: )
./calc.at:1407: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
      ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
      : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1407: cat stderr
input:
| (* *) + (*) + (*)
./calc.at:1407: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token '*' (1.1: )
syntax error
Shifting token error (1.1: )
Entering state 11
Next token is token '*' (1.1: )
Error: discarding token '*' (1.1: )
Reading a token
Next token is token '*' (1.1: )
Error: discarding token '*' (1.1: )
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token '*' (1.1: )
syntax error
Shifting token error (1.1: )
Entering state 11
Next token is token '*' (1.1: )
Error: discarding token '*' (1.1: )
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 30
Reading a token
Next token is token '+' (1.1: )
Reducing stack 0 by rule 7 (line 98):
   $1 = nterm exp (1.1: 1111)
   $2 = token '+' (1.1: )
   $3 = nterm exp (1.1: 1111)
-> $$ = nterm exp (1.1: 2222)
Entering state 8
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token '*' (1.1: )
syntax error
Shifting token error (1.1: )
Entering state 11
Next token is token '*' (1.1: )
Error: discarding token '*' (1.1: )
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 30
Reading a token
Next token is token '\n' (1.1: )
Reducing stack 0 by rule 7 (line 98):
   $1 = nterm exp (1.1: 2222)
   $2 = token '+' (1.1: )
   $3 = nterm exp (1.1: 1111)
-> $$ = nterm exp (1.1: 3333)
Entering state 8
Next token is token '\n' (1.1: )
Shifting token '\n' (1.1: )
Entering state 25
Reducing stack 0 by rule 4 (line 77):
   $1 = nterm exp (1.1: 3333)
   $2 = token '\n' (1.1: )
-> $$ = nterm line (1.1: )
Entering state 7
Reducing stack 0 by rule 1 (line 71):
   $1 = nterm line (1.1: )
-> $$ = nterm input (1.1: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (1.1: )
Entering state 17
Cleanup: popping token "end of input" (1.1: )
Cleanup: popping nterm input (1.1: )
./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token '*' (1.1: )
syntax error
Shifting token error (1.1: )
Entering state 11
Next token is token '*' (1.1: )
Error: discarding token '*' (1.1: )
Reading a token
Next token is token '*' (1.1: )
Error: discarding token '*' (1.1: )
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token '*' (1.1: )
syntax error
Shifting token error (1.1: )
Entering state 11
Next token is token '*' (1.1: )
Error: discarding token '*' (1.1: )
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 30
Reading a token
Next token is token '+' (1.1: )
Reducing stack 0 by rule 7 (line 98):
   $1 = nterm exp (1.1: 1111)
   $2 = token '+' (1.1: )
   $3 = nterm exp (1.1: 1111)
-> $$ = nterm exp (1.1: 2222)
Entering state 8
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token '*' (1.1: )
syntax error
Shifting token error (1.1: )
Entering state 11
Next token is token '*' (1.1: )
Error: discarding token '*' (1.1: )
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 30
Reading a token
Next token is token '\n' (1.1: )
Reducing stack 0 by rule 7 (line 98):
   $1 = nterm exp (1.1: 2222)
   $2 = token '+' (1.1: )
   $3 = nterm exp (1.1: 1111)
-> $$ = nterm exp (1.1: 3333)
Entering state 8
Next token is token '\n' (1.1: )
Shifting token '\n' (1.1: )
Entering state 25
Reducing stack 0 by rule 4 (line 77):
   $1 = nterm exp (1.1: 3333)
   $2 = token '\n' (1.1: )
-> $$ = nterm line (1.1: )
Entering state 7
Reducing stack 0 by rule 1 (line 71):
   $1 = nterm line (1.1: )
-> $$ = nterm input (1.1: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (1.1: )
Entering state 17
Cleanup: popping token "end of input" (1.1: )
Cleanup: popping nterm input (1.1: )
./calc.at:1407: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
      ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
      : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1407: cat stderr
input:
| 1 + 2 * 3 + !+ ++
./calc.at:1407: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token "number" (1.1: 2)
Shifting token "number" (1.1: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 2)
-> $$ = nterm exp (1.1: 2)
Entering state 30
Reading a token
Next token is token '*' (1.1: )
Shifting token '*' (1.1: )
Entering state 22
Reading a token
Next token is token "number" (1.1: 3)
Shifting token "number" (1.1: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 3)
-> $$ = nterm exp (1.1: 3)
Entering state 31
Reading a token
Next token is token '+' (1.1: )
Reducing stack 0 by rule 9 (line 100):
   $1 = nterm exp (1.1: 2)
   $2 = token '*' (1.1: )
   $3 = nterm exp (1.1: 3)
-> $$ = nterm exp (1.1: 6)
Entering state 30
Next token is token '+' (1.1: )
Reducing stack 0 by rule 7 (line 98):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.1: )
   $3 = nterm exp (1.1: 6)
-> $$ = nterm exp (1.1: 7)
Entering state 8
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token '!' (1.1: )
Shifting token '!' (1.1: )
Entering state 5
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 14
Reducing stack 0 by rule 17 (line 122):
   $1 = token '!' (1.1: )
   $2 = token '+' (1.1: )
Cleanup: popping token '+' (1.1: )
Cleanup: popping nterm exp (1.1: 7)
./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token "number" (1.1: 2)
Shifting token "number" (1.1: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 2)
-> $$ = nterm exp (1.1: 2)
Entering state 30
Reading a token
Next token is token '*' (1.1: )
Shifting token '*' (1.1: )
Entering state 22
Reading a token
Next token is token "number" (1.1: 3)
Shifting token "number" (1.1: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 3)
-> $$ = nterm exp (1.1: 3)
Entering state 31
Reading a token
Next token is token '+' (1.1: )
Reducing stack 0 by rule 9 (line 100):
   $1 = nterm exp (1.1: 2)
   $2 = token '*' (1.1: )
   $3 = nterm exp (1.1: 3)
-> $$ = nterm exp (1.1: 6)
Entering state 30
Next token is token '+' (1.1: )
Reducing stack 0 by rule 7 (line 98):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.1: )
   $3 = nterm exp (1.1: 6)
-> $$ = nterm exp (1.1: 7)
Entering state 8
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token '!' (1.1: )
Shifting token '!' (1.1: )
Entering state 5
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 14
Reducing stack 0 by rule 17 (line 122):
   $1 = token '!'
(1.1: )
   $2 = token '+' (1.1: )
Cleanup: popping token '+' (1.1: )
Cleanup: popping nterm exp (1.1: 7)
input:
| 1 + 2 * 3 + !- ++
./calc.at:1407: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token "number" (1.1: 2)
Shifting token "number" (1.1: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 2)
-> $$ = nterm exp (1.1: 2)
Entering state 30
Reading a token
Next token is token '*' (1.1: )
Shifting token '*' (1.1: )
Entering state 22
Reading a token
Next token is token "number" (1.1: 3)
Shifting token "number" (1.1: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 3)
-> $$ = nterm exp (1.1: 3)
Entering state 31
Reading a token
Next token is token '+' (1.1: )
Reducing stack 0 by rule 9 (line 100):
   $1 = nterm exp (1.1: 2)
   $2 = token '*' (1.1: )
   $3 = nterm exp (1.1: 3)
-> $$ = nterm exp (1.1: 6)
Entering state 30
Next token is token '+' (1.1: )
Reducing stack 0 by rule 7 (line 98):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.1: )
   $3 = nterm exp (1.1: 6)
-> $$ = nterm exp (1.1: 7)
Entering state 8
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token '!' (1.1: )
Shifting token '!' (1.1: )
Entering state 5
Reading a token
Next token is token '-' (1.1: )
Shifting token '-' (1.1: )
Entering state 13
Reducing stack 0 by rule 18 (line 123):
   $1 = token '!' (1.1: )
   $2 = token '-' (1.1: )
Cleanup: popping token '+' (1.1: )
Cleanup: popping nterm exp (1.1: 7)
./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token "number" (1.1: 2)
Shifting token "number" (1.1: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 2)
-> $$ = nterm exp (1.1: 2)
Entering state 30
Reading a token
Next token is token '*' (1.1: )
Shifting token '*' (1.1: )
Entering state 22
Reading a token
Next token is token "number" (1.1: 3)
Shifting token "number" (1.1: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 3)
-> $$ = nterm exp (1.1: 3)
Entering state 31
Reading a token
Next token is token '+' (1.1: )
Reducing stack 0 by rule 9 (line 100):
   $1 = nterm exp (1.1: 2)
   $2 = token '*' (1.1: )
   $3 = nterm exp (1.1: 3)
-> $$ = nterm exp (1.1: 6)
Entering state 30
Next token is token '+' (1.1: )
Reducing stack 0 by rule 7 (line 98):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.1: )
   $3 = nterm exp (1.1: 6)
-> $$ = nterm exp (1.1: 7)
Entering state 8
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token '!' (1.1: )
Shifting token '!' (1.1: )
Entering state 5
Reading a token
Next token is token '-' (1.1: )
Shifting token '-' (1.1: )
Entering state 13
Reducing stack 0 by rule 18 (line 123):
   $1 = token '!' (1.1: )
   $2 = token '-' (1.1: )
Cleanup: popping token '+' (1.1: )
Cleanup: popping nterm exp (1.1: 7)
./calc.at:1407: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
      ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
      : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1407: cat stderr
input:
| 1 + 2 * 3 + !* ++
./calc.at:1407: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token "number" (1.1: 2)
Shifting token "number" (1.1: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 2)
-> $$ = nterm exp (1.1: 2)
Entering state 30
Reading a token
Next token is token '*' (1.1: )
Shifting token '*' (1.1: )
Entering state 22
Reading a token
Next token is token "number" (1.1: 3)
Shifting token "number" (1.1: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 3)
-> $$ = nterm exp (1.1: 3)
Entering state 31
Reading a token
Next token is token '+' (1.1: )
Reducing stack 0 by rule 9 (line 100):
   $1 = nterm exp (1.1: 2)
   $2 = token '*' (1.1: )
   $3 = nterm exp (1.1: 3)
-> $$ = nterm exp (1.1: 6)
Entering state 30
Next token is token '+' (1.1: )
Reducing stack 0 by rule 7 (line 98):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.1: )
   $3 = nterm exp (1.1: 6)
-> $$ = nterm exp (1.1: 7)
Entering state 8
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token '!' (1.1: )
Shifting token '!' (1.1: )
Entering state 5
Reading a token
Next token is token '*' (1.1: )
Shifting token '*' (1.1: )
Entering state 15
Reducing stack 0 by rule 19 (line 124):
   $1 = token '!' (1.1: )
   $2 = token '*' (1.1: )
memory exhausted
Cleanup: popping token '+' (1.1: )
Cleanup: popping nterm exp (1.1: 7)
./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token "number" (1.1: 2)
Shifting token "number" (1.1: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 2)
-> $$ = nterm exp (1.1: 2)
Entering state 30
Reading a token
Next token is token '*' (1.1: )
Shifting token '*' (1.1: )
Entering state 22
Reading a token
Next token is token "number" (1.1: 3)
Shifting token "number" (1.1: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 3)
-> $$ = nterm exp (1.1: 3)
Entering state 31
Reading a token
Next token is token '+' (1.1: )
Reducing stack 0 by rule 9 (line 100):
   $1 = nterm exp (1.1: 2)
   $2 = token '*' (1.1: )
   $3 = nterm exp (1.1: 3)
-> $$ = nterm exp (1.1: 6)
Entering state 30
Next token is token '+' (1.1: )
Reducing stack 0 by rule 7 (line 98):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.1: )
   $3 = nterm exp (1.1: 6)
-> $$ = nterm exp (1.1: 7)
Entering state 8
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token '!' (1.1: )
Shifting token '!' (1.1: )
Entering state 5
Reading a token
Next token is token '*' (1.1: )
Shifting token '*' (1.1: )
Entering state 15
Reducing stack 0 by rule 19 (line 124):
   $1 = token '!'
(1.1: )
   $2 = token '*' (1.1: )
memory exhausted
Cleanup: popping token '+' (1.1: )
Cleanup: popping nterm exp (1.1: 7)
./calc.at:1407: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
      ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
      : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1407: cat stderr
input:
| (#) + (#) = 2222
./calc.at:1407: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
syntax error: invalid character: '#'
Next token is token error (1.1: )
Shifting token error (1.1: )
Entering state 11
Next token is token error (1.1: )
Error: discarding token error (1.1: )
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
syntax error: invalid character: '#'
Next token is token error (1.1: )
Shifting token error (1.1: )
Entering state 11
Next token is token error (1.1: )
Error: discarding token error (1.1: )
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 30
Reading a token
Next token is token '=' (1.1: )
Reducing stack 0 by rule 7 (line 98):
   $1 = nterm exp (1.1: 1111)
   $2 = token '+' (1.1: )
   $3 = nterm exp (1.1: 1111)
-> $$ = nterm exp (1.1: 2222)
Entering state 8
Next token is token '=' (1.1: )
Shifting token '=' (1.1: )
Entering state 19
Reading a token
Next token is token "number" (1.1: 2222)
Shifting token "number" (1.1: 2222)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 2222)
-> $$ = nterm exp (1.1: 2222)
Entering state 28
Reading a token
Next token is token '\n' (1.1: )
Reducing stack 0 by rule 6 (line 82):
   $1 = nterm exp (1.1: 2222)
   $2 = token '=' (1.1: )
   $3 = nterm exp (1.1: 2222)
-> $$ = nterm exp (1.1: 2222)
Entering state 8
Next token is token '\n' (1.1: )
Shifting token '\n' (1.1: )
Entering state 25
Reducing stack 0 by rule 4 (line 77):
   $1 = nterm exp (1.1: 2222)
   $2 = token '\n' (1.1: )
-> $$ = nterm line (1.1: )
Entering state 7
Reducing stack 0 by rule 1 (line 71):
   $1 = nterm line (1.1: )
-> $$ = nterm input (1.1: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (1.1: )
Entering state 17
Cleanup: popping token "end of input" (1.1: )
Cleanup: popping nterm input (1.1: )
./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
syntax error: invalid character: '#'
Next token is token error (1.1: )
Shifting token error (1.1: )
Entering state 11
Next token is token error (1.1: )
Error: discarding token error (1.1: )
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
syntax error: invalid character: '#'
Next token is token error (1.1: )
Shifting token error (1.1: )
Entering state 11
Next token is token error (1.1: )
Error: discarding token error (1.1: )
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 30
Reading a token
Next token is token '=' (1.1: )
Reducing stack 0 by rule 7 (line 98):
   $1 = nterm exp (1.1: 1111)
   $2 = token '+' (1.1: )
   $3 = nterm exp (1.1: 1111)
-> $$ = nterm exp (1.1: 2222)
Entering state 8
Next token is token '=' (1.1: )
Shifting token '=' (1.1: )
Entering state 19
Reading a token
Next token is token "number" (1.1: 2222)
Shifting token "number" (1.1: 2222)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 2222)
-> $$ = nterm exp (1.1: 2222)
Entering state 28
Reading a token
Next token is token '\n' (1.1: )
Reducing stack 0 by rule 6 (line 82):
   $1 = nterm exp (1.1: 2222)
   $2 = token '=' (1.1: )
   $3 = nterm exp (1.1: 2222)
-> $$ = nterm exp (1.1: 2222)
Entering state 8
Next token is token '\n' (1.1: )
Shifting token '\n' (1.1: )
Entering state 25
Reducing stack 0 by rule 4 (line 77):
   $1 = nterm exp (1.1: 2222)
   $2 = token '\n' (1.1: )
-> $$ = nterm line (1.1: )
Entering state 7
Reducing stack 0 by rule 1 (line 71):
   $1 = nterm line (1.1: )
-> $$ = nterm input (1.1: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (1.1: )
Entering state 17
Cleanup: popping token "end of input" (1.1: )
Cleanup: popping nterm input (1.1: )
./calc.at:1407: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
      ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
      : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1407: cat stderr
input:
| (1 + #) = 1111
./calc.at:1407: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 12
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
syntax error: invalid character: '#'
Next token is token error (1.1: )
Error: popping token '+' (1.1: )
Error: popping nterm exp (1.1: 1)
Shifting token error (1.1: )
Entering state 11
Next token is token error (1.1: )
Error: discarding token error (1.1: )
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Reading a token
Next token is token '=' (1.1: )
Shifting token '=' (1.1: )
Entering state 19
Reading a token
Next token is token "number" (1.1: 1111)
Shifting token "number" (1.1: 1111)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1111)
-> $$ = nterm exp (1.1: 1111)
Entering state 28
Reading a token
Next token is token '\n' (1.1: )
Reducing stack 0 by rule 6 (line 82):
   $1 = nterm exp (1.1: 1111)
   $2 = token '=' (1.1: )
   $3 = nterm exp (1.1: 1111)
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Next token is token '\n' (1.1: )
Shifting token '\n' (1.1: )
Entering state 25
Reducing stack 0 by rule 4 (line 77):
   $1 = nterm exp (1.1: 1111)
   $2 = token '\n' (1.1: )
-> $$ = nterm line (1.1: )
Entering state 7
Reducing stack 0
by rule 1 (line 71):
   $1 = nterm line (1.1: )
-> $$ = nterm input (1.1: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (1.1: )
Entering state 17
Cleanup: popping token "end of input" (1.1: )
Cleanup: popping nterm input (1.1: )
./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 12
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
syntax error: invalid character: '#'
Next token is token error (1.1: )
Error: popping token '+' (1.1: )
Error: popping nterm exp (1.1: 1)
Shifting token error (1.1: )
Entering state 11
Next token is token error (1.1: )
Error: discarding token error (1.1: )
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Reading a token
Next token is token '=' (1.1: )
Shifting token '=' (1.1: )
Entering state 19
Reading a token
Next token is token "number" (1.1: 1111)
Shifting token "number" (1.1: 1111)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1111)
-> $$ = nterm exp (1.1: 1111)
Entering state 28
Reading a token
Next token is token '\n' (1.1: )
Reducing stack 0 by rule 6 (line 82):
   $1 = nterm exp (1.1: 1111)
   $2 = token '=' (1.1: )
   $3 = nterm exp (1.1: 1111)
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Next token is token '\n' (1.1: )
Shifting token '\n' (1.1: )
Entering state 25
Reducing stack 0 by rule 4 (line 77):
   $1 = nterm exp (1.1: 1111)
   $2 = token '\n' (1.1: )
-> $$ = nterm line (1.1: )
Entering state 7
Reducing stack 0 by rule 1 (line 71):
   $1 = nterm line (1.1: )
-> $$ = nterm input (1.1: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (1.1: )
Entering state 17
Cleanup: popping token "end of input" (1.1: )
Cleanup: popping nterm input (1.1: )
./calc.at:1407: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
      ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
      : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1407: cat stderr
input:
| (# + 1) = 1111
./calc.at:1407: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
syntax error: invalid character: '#'
Next token is token error (1.1: )
Shifting token error (1.1: )
Entering state 11
Next token is token error (1.1: )
Error: discarding token error (1.1: )
Reading a token
Next token is token '+' (1.1: )
Error: discarding token '+' (1.1: )
Reading a token
Next token is token "number" (1.1: 1)
Error: discarding token "number" (1.1: 1)
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Reading a token
Next token is token '=' (1.1: )
Shifting token '=' (1.1: )
Entering state 19
Reading a token
Next token is token "number" (1.1: 1111)
Shifting token "number" (1.1: 1111)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1111)
-> $$ = nterm exp (1.1: 1111)
Entering state 28
Reading a token
Next token is token '\n' (1.1: )
Reducing stack 0 by rule 6 (line 82):
   $1 = nterm exp (1.1: 1111)
   $2 = token '=' (1.1: )
   $3 = nterm exp (1.1: 1111)
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Next token is token '\n' (1.1: )
Shifting token '\n' (1.1: )
Entering state 25
Reducing stack 0 by rule 4 (line 77):
   $1 = nterm exp (1.1: 1111)
   $2 = token '\n' (1.1: )
-> $$ = nterm line (1.1: )
Entering state 7
Reducing stack 0 by rule 1 (line 71):
   $1 = nterm line (1.1: )
-> $$ = nterm input (1.1: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (1.1: )
Entering state 17
Cleanup: popping token "end of input" (1.1: )
Cleanup: popping nterm input (1.1: )
./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
syntax error: invalid character: '#'
Next token is token error (1.1: )
Shifting token error (1.1: )
Entering state 11
Next token is token error (1.1: )
Error: discarding token error (1.1: )
Reading a token
Next token is token '+' (1.1: )
Error: discarding token '+' (1.1: )
Reading a token
Next token is token "number" (1.1: 1)
Error: discarding token "number" (1.1: 1)
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Reading a token
Next token is token '=' (1.1: )
Shifting token '=' (1.1: )
Entering state 19
Reading a token
Next token is token "number" (1.1: 1111)
Shifting token "number" (1.1: 1111)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1111)
-> $$ = nterm exp (1.1: 1111)
Entering state 28
Reading a token
Next token is token '\n' (1.1: )
Reducing stack 0 by rule 6 (line 82):
   $1 = nterm exp (1.1: 1111)
   $2 = token '=' (1.1: )
   $3 = nterm exp (1.1: 1111)
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Next token is token '\n' (1.1: )
Shifting token '\n' (1.1: )
Entering state 25
Reducing stack 0 by rule 4 (line 77):
   $1 = nterm exp (1.1: 1111)
   $2 = token '\n' (1.1: )
-> $$ = nterm line (1.1: )
Entering state 7
Reducing stack 0 by rule 1 (line 71):
   $1 = nterm line (1.1: )
-> $$ = nterm input (1.1: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (1.1: )
Entering state 17
Cleanup: popping token "end of input" (1.1: )
Cleanup: popping nterm input (1.1: )
./calc.at:1407: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
      ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
      : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1407: cat stderr
input:
| (1 + # + 1) = 1111
./calc.at:1407: $PREPARSER ./calc input
stderr:
stdout:
./calc.at:1408: "$PERL" -ne '
  chomp;
  print "$ARGV:$.: {$_}\n"
    if (# No starting/ending empty lines.
        (eof || $. == 1) && /^\s*$/
        # No trailing space.
        || /\s$/
        # No tabs.
        || /\t/
       )' calc.c calc.h
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 12
Reading a token
Next token is token '+' (1.1: )
Shifting token '+' (1.1: )
Entering state 21
Reading a token
syntax error: invalid character: '#'
Next token is token error (1.1: )
Error: popping token '+' (1.1: )
Error: popping nterm exp (1.1: 1)
Shifting token error (1.1: )
Entering state 11
Next token is token error (1.1: )
Error: discarding token error (1.1: )
Reading a token
Next token is token '+' (1.1: )
Error: discarding token '+' (1.1: )
Reading a token
Next token is token "number" (1.1: 1)
Error: discarding token "number" (1.1: 1)
Reading a token
Next token is token ')' (1.1: )
Entering state 11
Next token is token ')' (1.1: )
Shifting token ')' (1.1: )
Entering state 26
Reducing stack 0 by rule 14 (line 119):
   $1 = token '(' (1.1: )
   $2 = token error (1.1: )
   $3 = token ')' (1.1: )
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Reading a token
Next token is token '=' (1.1: )
Shifting token '=' (1.1: )
Entering state 19
Reading a token
Next token is token "number" (1.1: 1111)
Shifting token "number" (1.1: 1111)
Entering state 1
Reducing stack 0 by rule 5 (line 81):
   $1 = token "number" (1.1: 1111)
-> $$ = nterm exp (1.1: 1111)
Entering state 28
Reading a token
Next token is token '\n' (1.1: )
Reducing stack 0 by rule 6 (line 82):
   $1 = nterm exp (1.1: 1111)
   $2 = token '=' (1.1: )
   $3 = nterm exp (1.1: 1111)
-> $$ = nterm exp (1.1: 1111)
Entering state 8
Next token is token '\n' (1.1: )
Shifting token '\n' (1.1: )
Entering state 25
Reducing stack 0 by rule 4 (line 77):
   $1 = nterm exp (1.1: 1111)
   $2 = token '\n' (1.1: )
-> $$ = nterm line (1.1: )
Entering state 7
Reducing stack 0 by rule 1 (line
71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: )
./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token syntax error: invalid character: '#' Next token is token error (1.1: ) Error: popping token '+' (1.1: ) Error: popping nterm exp (1.1: 1) Shifting token error (1.1: ) Entering state 11 Next token is token error (1.1: ) Error: discarding token error (1.1: ) Reading a token Next token is token '+' (1.1: ) Error: discarding token '+' (1.1: ) Reading a token Next token is token "number" (1.1: 1) Error: discarding token "number" (1.1: 1) Reading a token Next token is token ')' (1.1: ) Entering state 11 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 26 Reducing stack 0 by rule 14 (line 119): $1 = token '(' (1.1: ) $2 = token error (1.1: ) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 1111) Entering state 8 Reading a token Next token is token '=' (1.1: ) Shifting token '=' (1.1: ) Entering state 19 Reading a token Next token is token "number" (1.1: 1111) Shifting token "number" (1.1: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 28 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 6 (line 82): $1 = nterm exp (1.1: 1111) $2 = token '=' (1.1: ) $3 = nterm exp (1.1: 1111) -> $$ = nterm exp (1.1: 1111) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 1111) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Reducing stack 0 by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: )
input:
  | 1 + 2 * 3 = 7
  | 1 + 2 * -3 = -5
  |
  | -1^2 = -1
  | (-1)^2 = 1
  |
  | ---1 = -1
  |
  | 1 - 2 - 3 = -4
  | 1 - (2 - 3) = 2
  |
  | 2^2^3 = 256
  | (2^2)^3 = 64
./calc.at:1408: $PREPARSER ./calc input
./calc.at:1407: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
stderr:
Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9
(line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 
Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): 
$1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is 
token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 
Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) 
Shifting token '=' (7.6: ) Entering state 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 
29 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing 
stack 0 by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = 
nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp 
(12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (13.1: ) $2 = nterm exp 
(13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (14.1: ) Entering state 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: )
./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1407: cat stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ =
nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Reading a token Next token is 
token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = 
nterm exp (4.4: 2) Entering state 33 Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token 
')' (5.4: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering 
state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' 
(7.10-8.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Reading a 
token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm 
exp (10.10: 3) Entering state 29 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 
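The trace above evaluates input lines 9 and 10: `1 - 2 - 3 = -4` and `1 - (2 - 3) = 2`, demonstrating that binary `-` in the calc grammar is left-associative. A minimal Python sketch (an annotation added here, not part of the original log) re-checking the values the trace reports:

```python
# Left-associative subtraction: "1 - 2 - 3" groups as (1 - 2) - 3.
left_assoc = (1 - 2) - 3   # what the calc grammar computes for "1 - 2 - 3"
grouped = 1 - (2 - 3)      # explicit grouping, as in "1 - (2 - 3)"

assert left_assoc == -4    # matches "nterm exp (9.1-9: -4)" in the trace
assert grouped == 2        # matches "nterm exp (10.1-11: 2)" in the trace
```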
Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> 
$$ = nterm line (12.1-13.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 
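Input lines 12 and 13 of the test exercise exponentiation associativity: `2^2^3` reduces to 256 (right-associative, i.e. `2^(2^3)`), while the explicitly grouped `(2^2)^3` reduces to 64. Python's `**` happens to share this right-associativity, so the values can be re-checked directly (an annotation added here, not part of the original log):

```python
# '^' in the calc grammar is right-associative, like Python's **.
assert 2 ** 2 ** 3 == 2 ** (2 ** 3) == 256   # "2^2^3 = 256" in the trace
assert (2 ** 2) ** 3 == 64                   # "(2^2)^3 = 64" in the trace
```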
Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (14.1: ) Entering state 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) input: | (1 + 1) / (1 - 1) ./calc.at:1407: $PREPARSER ./calc input input: | 1 2 ./calc.at:1408: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.3: 2) stderr: ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 
Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Reading a token Next token is token ')' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 2) Entering state 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Reducing stack 0 by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 2) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 2) Entering state 8 Reading a token Next token is token '/' (1.1: ) Shifting token '/' (1.1: ) Entering state 23 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 29 Reading a token Next token is token ')' (1.1: ) Reducing stack 0 by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 0) Entering state 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Reducing stack 0 by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 0) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 0) Entering state 32 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 10 (line 101): $1 = nterm exp (1.1: 2) $2 = token '/' (1.1: ) $3 = nterm exp (1.1: 0) error: null divisor -> $$ = nterm exp (1.1: 2) 
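The trace above runs `(1 + 1) / (1 - 1)`: rule 10 reports `error: null divisor` and, per the trace, the result nonterminal keeps the value 2 rather than aborting the parse. A hedged Python sketch of that behavior (the fallback-to-numerator detail is inferred from the trace, not from the calc.at source; `div` and `report` are hypothetical names):

```python
def div(num, den, report):
    """Guarded division in the spirit of the calc test's rule 10:
    on a zero divisor, report the error and keep parsing with a
    fallback value (the trace shows $$ staying 2 after the error)."""
    if den == 0:
        report("error: null divisor")
        return num   # mirror the trace: "(1 + 1) / (1 - 1)" yields 2
    return num / den

msgs = []
result = div(1 + 1, 1 - 1, msgs.append)
assert result == 2
assert msgs == ["error: null divisor"]
```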
Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 2) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Reducing stack 0 by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (1.1: ) Entering state 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1407: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.3: 2) Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Reading a token Next token is token '+' (1.1: ) Shifting token '+' (1.1: ) Entering state 21 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 30 Reading a token Next token is token ')' (1.1: ) Reducing stack 0 by rule 7 (line 98): $1 = nterm exp (1.1: 1) $2 = token '+' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 2) Entering state 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Reducing stack 0 by rule 13 (line 
118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 2) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 2) Entering state 8 Reading a token Next token is token '/' (1.1: ) Shifting token '/' (1.1: ) Entering state 23 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 12 Reading a token Next token is token '-' (1.1: ) Shifting token '-' (1.1: ) Entering state 20 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 81): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 29 Reading a token Next token is token ')' (1.1: ) Reducing stack 0 by rule 8 (line 99): $1 = nterm exp (1.1: 1) $2 = token '-' (1.1: ) $3 = nterm exp (1.1: 1) -> $$ = nterm exp (1.1: 0) Entering state 12 Next token is token ')' (1.1: ) Shifting token ')' (1.1: ) Entering state 27 Reducing stack 0 by rule 13 (line 118): $1 = token '(' (1.1: ) $2 = nterm exp (1.1: 0) $3 = token ')' (1.1: ) -> $$ = nterm exp (1.1: 0) Entering state 32 Reading a token Next token is token '\n' (1.1: ) Reducing stack 0 by rule 10 (line 101): $1 = nterm exp (1.1: 2) $2 = token '/' (1.1: ) $3 = nterm exp (1.1: 0) error: null divisor -> $$ = nterm exp (1.1: 2) Entering state 8 Next token is token '\n' (1.1: ) Shifting token '\n' (1.1: ) Entering state 25 Reducing stack 0 by rule 4 (line 77): $1 = nterm exp (1.1: 2) $2 = token '\n' (1.1: ) -> $$ = nterm line (1.1: ) Entering state 7 Reducing stack 0 by rule 1 (line 71): $1 = nterm line (1.1: ) -> $$ = nterm input (1.1: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (1.1: ) Entering state 17 Cleanup: popping token "end of input" (1.1: ) Cleanup: popping nterm input (1.1: ) ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1407: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1408: cat stderr input: | 1//2 ./calc.at:1408: $PREPARSER ./calc input ./calc.at:1407: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) 515. 
calc.at:1407: ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ok stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1408: cat stderr input: | error ./calc.at:1408: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1408: cat stderr input: | 1 = 2 = 3 ./calc.at:1408: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 518. calc.at:1411: testing Calculator %glr-parser api.pure parse.error=verbose %debug %locations %header %name-prefix "calc" %verbose ... 
./calc.at:1411: mv calc.y.tmp calc.y Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1411: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1408: cat stderr input: | | +1 ./calc.at:1408: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1408: cat stderr ./calc.at:1408: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Reading a token Now at end of input. 
1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1408: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1408: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing 
stack 0 by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' 
(1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Reading a token Next token 
is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1408: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1408: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Reducing stack 0 by rule 16 (line 128): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 
-> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Reducing stack 0 by rule 16 (line 128): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 
-> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: stdout: ./calc.at:1409: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c calc.h ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1408: cat stderr input: input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 | (- *) + (1 2) = 1 ./calc.at:1408: $PREPARSER ./calc input ./calc.at:1409: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 127): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" 
(1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 127): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" 
(1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp 
(1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' 
(2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 
6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 
Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next 
token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) 
Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is 
token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Reading a token Next 
token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' 
(10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token 
"number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Reading a token Next token is token 
"number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (14.1: ) Entering state 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: )
./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1408: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1408: cat stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token
'\n' (1.14-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token 
is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Reading a token Next token is token '=' (4.6: ) 
Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (5.2: ) $2 = nterm 
exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm 
line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp 
(7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 
Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 123): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Reading a token Next token is token ')' (10.11: ) 
Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" 
(12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Reducing stack 0 by rule 2 
(line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 124): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.11-12: 64) 
-> $$ = nterm exp (13.11-12: 64) Entering state 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (14.1: ) Entering state 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) input: | (* *) + (*) + (*) input: ./calc.at:1408: $PREPARSER ./calc input | 1 2 ./calc.at:1409: $PREPARSER ./calc input stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.3: 2) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.3: 2) stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1409: cat stderr ./calc.at:1408: cat stderr input: | 1//2 ./calc.at:1409: $PREPARSER ./calc input stderr: input: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | 1 + 2 * 3 + !+ ++ ./calc.at:1408: $PREPARSER ./calc input stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = 
nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 129): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: ./calc.at:1409: cat stderr Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 129): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) input: input: | error ./calc.at:1409: $PREPARSER ./calc input | 1 + 2 * 3 + !- ++ stderr: ./calc.at:1408: $PREPARSER ./calc input Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 130): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 130): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1409: cat stderr ./calc.at:1408: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1408: $PREPARSER ./calc input input: | 1 = 2 = 3 ./calc.at:1409: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) 
Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Reducing stack 0 by rule 19 (line 131): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) stderr: ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) 
Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Reducing stack 0 by rule 19 (line 131): $1 = token '!' 
(1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1408: cat stderr ./calc.at:1409: cat stderr input: | (#) + (#) = 2222 ./calc.at:1408: $PREPARSER ./calc input stderr: input: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 
2222) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) | | +1 ./calc.at:1409: $PREPARSER ./calc input ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) 
Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering 
state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1408: cat stderr ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (1 + #) = 1111 ./calc.at:1408: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: 
) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: cat stderr ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1409: $PREPARSER ./calc /dev/null stderr: stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering 
state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1408: cat stderr ./calc.at:1409: cat stderr input: | (# + 1) = 1111 input: ./calc.at:1408: $PREPARSER ./calc input | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1409: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 
(line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 
Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of 
input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 105): $1 
= nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1408: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1409: cat stderr ./calc.at:1408: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Reading a token Next token is token "number" (1.15-18: 1111) Shifting token "number" (1.15-18: 1111) Entering state 1 Reducing 
stack 0 by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (!!) + (1 2) = 1 ./calc.at:1409: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next 
token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Reading a token Next token is token "number" (1.15-18: 1111) Shifting token "number" (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Reducing stack 0 by rule 16 (line 128): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 
-> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Reducing stack 0 by rule 16 (line 128): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 
-> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1408: cat stderr ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (1 + 1) / (1 - 1) ./calc.at:1408: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (1.1: ) $2 = nterm exp 
(1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 108): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1408: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1409: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = 
token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 108): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: ./calc.at:1408: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | (- *) + (1 2) = 1 ./calc.at:1409: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 127): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) 
$3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1408: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 127): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" 
(1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) 516. calc.at:1408: ok ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1409: cat stderr input: | (* *) + (*) + (*) ./calc.at:1409: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1409: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1409: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 129): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 519. calc.at:1413: testing Calculator %glr-parser api.pure parse.error=verbose %debug %locations %header %name-prefix "calc" %verbose %parse-param {semantic_value *result}{int *count}{int *nerrs} ... stderr: ./calc.at:1413: mv calc.y.tmp calc.y Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 129): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) input: | 1 + 2 * 3 + !- ++ ./calc.at:1409: $PREPARSER ./calc input ./calc.at:1413: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 130): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 130): $1 = token '!' 
(1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1409: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1409: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Reducing stack 0 by rule 19 (line 131): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 107): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Reducing stack 0 by rule 19 (line 131): $1 = token '!' 
(1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1409: cat stderr input: | (#) + (#) = 2222 ./calc.at:1409: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-3: 1111) $2 = 
token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Reducing 
stack 0 by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1409: cat stderr input: | (1 + #) = 1111 ./calc.at:1409: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) 
-> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1413: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) 
$2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1409: cat stderr input: | (# + 1) = 1111 ./calc.at:1409: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" 
(1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token 
"number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1409: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1409: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Reading a token Next token is token "number" (1.15-18: 1111) Shifting token "number" (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering 
state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 126): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Reading a token Next token is token "number" (1.15-18: 1111) Shifting 
token "number" (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1409: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1409: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1409: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 105): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) 
-> $$ = nterm exp (1.16: 1) Entering state 29 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 106): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Reducing stack 0 by rule 13 (line 125): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 108): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: )
Entering state 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1409: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.2: 1)
-> $$ = nterm exp (1.2: 1)
Entering state 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 21
Reading a token
Next token is token "number" (1.6: 1)
Shifting token "number" (1.6: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.6: 1)
-> $$ = nterm exp (1.6: 1)
Entering state 30
Reading a token
Next token is token ')' (1.7: )
Reducing stack 0 by rule 7 (line 105):
   $1 = nterm exp (1.2: 1)
   $2 = token '+' (1.4: )
   $3 = nterm exp (1.6: 1)
-> $$ = nterm exp (1.2-6: 2)
Entering state 12
Next token is token ')' (1.7: )
Shifting token ')' (1.7: )
Entering state 27
Reducing stack 0 by rule 13 (line 125):
   $1 = token '(' (1.1: )
   $2 = nterm exp (1.2-6: 2)
   $3 = token ')' (1.7: )
-> $$ = nterm exp (1.1-7: 2)
Entering state 8
Reading a token
Next token is token '/' (1.9: )
Shifting token '/' (1.9: )
Entering state 23
Reading a token
Next token is token '(' (1.11: )
Shifting token '(' (1.11: )
Entering state 4
Reading a token
Next token is token "number" (1.12: 1)
Shifting token "number" (1.12: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.12: 1)
-> $$ = nterm exp (1.12: 1)
Entering state 12
Reading a token
Next token is token '-' (1.14: )
Shifting token '-' (1.14: )
Entering state 20
Reading a token
Next token is token "number" (1.16: 1)
Shifting token "number" (1.16: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.16: 1)
-> $$ = nterm exp (1.16: 1)
Entering state 29
Reading a token
Next token is token ')' (1.17: )
Reducing stack 0 by rule 8 (line 106):
   $1 = nterm exp (1.12: 1)
   $2 = token '-' (1.14: )
   $3 = nterm exp (1.16: 1)
-> $$ = nterm exp (1.12-16: 0)
Entering state 12
Next token is token ')' (1.17: )
Shifting token ')' (1.17: )
Entering state 27
Reducing stack 0 by rule 13 (line 125):
   $1 = token '(' (1.11: )
   $2 = nterm exp (1.12-16: 0)
   $3 = token ')' (1.17: )
-> $$ = nterm exp (1.11-17: 0)
Entering state 32
Reading a token
Next token is token '\n' (1.18-2.0: )
Reducing stack 0 by rule 10 (line 108):
   $1 = nterm exp (1.1-7: 2)
   $2 = token '/' (1.9: )
   $3 = nterm exp (1.11-17: 0)
1.11-17: error: null divisor
-> $$ = nterm exp (1.1-17: 2)
Entering state 8
Next token is token '\n' (1.18-2.0: )
Shifting token '\n' (1.18-2.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (1.1-17: 2)
   $2 = token '\n' (1.18-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 78):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1409: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1409: cat stderr
517. calc.at:1409: ok

520. calc.at:1414: testing Calculator %glr-parser api.pure parse.error=verbose %debug %locations %header api.prefix={calc} %verbose %parse-param {semantic_value *result}{int *count}{int *nerrs} ...
./calc.at:1414: mv calc.y.tmp calc.y
./calc.at:1414: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y
./calc.at:1414: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS
stderr:
stdout:
./calc.at:1411: "$PERL" -ne '
  chomp;
  print "$ARGV:$.: {$_}\n"
    if (# No starting/ending empty lines.
        (eof || $. == 1) && /^\s*$/
        # No trailing space.
        || /\s$/
        # No tabs.
        || /\t/
       )' calc.c calc.h
input:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1411: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.3: )
Shifting token '+' (1.3: )
Entering state 21
Reading a token
Next token is token "number" (1.5: 2)
Shifting token "number" (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 30
Reading a token
Next token is token '*' (1.7: )
Shifting token '*' (1.7: )
Entering state 22
Reading a token
Next token is token "number" (1.9: 3)
Shifting token "number" (1.9: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.9: 3)
-> $$ = nterm exp (1.9: 3)
Entering state 31
Reading a token
Next token is token '=' (1.11: )
Reducing stack 0 by rule 9 (line 101):
   $1 = nterm exp (1.5: 2)
   $2 = token '*' (1.7: )
   $3 = nterm exp (1.9: 3)
-> $$ = nterm exp (1.5-9: 6)
Entering state 30
Next token is token '=' (1.11: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.3: )
   $3 = nterm exp (1.5-9: 6)
-> $$ = nterm exp (1.1-9: 7)
Entering state 8
Next token is token '=' (1.11: )
Shifting token '=' (1.11: )
Entering state 19
Reading a token
Next token is token "number" (1.13: 7)
Shifting token "number" (1.13: 7)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.13: 7)
-> $$ = nterm exp (1.13: 7)
Entering state 28
Reading a token
Next token is token '\n' (1.14-2.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (1.1-9: 7)
   $2 = token '=' (1.11: )
   $3 = nterm exp (1.13: 7)
-> $$ = nterm exp (1.1-13: 7)
Entering state 8
Next token is token '\n' (1.14-2.0: )
Shifting token '\n' (1.14-2.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (1.1-13: 7)
   $2 = token '\n' (1.14-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 78):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Next token is token "number" (2.1: 1)
Shifting token "number" (2.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (2.1: 1)
-> $$ = nterm exp (2.1: 1)
Entering state 8
Reading a token
Next token is token '+' (2.3: )
Shifting token '+' (2.3: )
Entering state 21
Reading a token
Next token is token "number" (2.5: 2)
Shifting token "number" (2.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (2.5: 2)
-> $$ = nterm exp (2.5: 2)
Entering state 30
Reading a token
Next token is token '*' (2.7: )
Shifting token '*' (2.7: )
Entering state 22
Reading a token
Next token is token '-' (2.9: )
Shifting token '-' (2.9: )
Entering state 2
Reading a token
Next token is token "number" (2.10: 3)
Shifting token "number" (2.10: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (2.10: 3)
-> $$ = nterm exp (2.10: 3)
Entering state 10
Reading a token
Next token is token '=' (2.12: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (2.9: )
   $2 = nterm exp (2.10: 3)
-> $$ = nterm exp (2.9-10: -3)
Entering state 31
Next token is token '=' (2.12: )
Reducing stack 0 by rule 9 (line 101):
   $1 = nterm exp (2.5: 2)
   $2 = token '*' (2.7: )
   $3 = nterm exp (2.9-10: -3)
-> $$ = nterm exp (2.5-10: -6)
Entering state 30
Next token is token '=' (2.12: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (2.1: 1)
   $2 = token '+' (2.3: )
   $3 = nterm exp (2.5-10: -6)
-> $$ = nterm exp (2.1-10: -5)
Entering state 8
Next token is token '=' (2.12: )
Shifting token '=' (2.12: )
Entering state 19
Reading a token
Next token is token '-' (2.14: )
Shifting token '-' (2.14: )
Entering state 2
Reading a token
Next token is token "number" (2.15: 5)
Shifting token "number" (2.15: 5)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (2.15: 5)
-> $$ = nterm exp (2.15: 5)
Entering state 10
Reading a token
Next token is token '\n' (2.16-3.0: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (2.14: )
   $2 = nterm exp (2.15: 5)
-> $$ = nterm exp (2.14-15: -5)
Entering state 28
Next token is token '\n' (2.16-3.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (2.1-10: -5)
   $2 = token '=' (2.12: )
   $3 = nterm exp (2.14-15: -5)
-> $$ = nterm exp (2.1-15: -5)
Entering state 8
Next token is token '\n' (2.16-3.0: )
Shifting token '\n' (2.16-3.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (2.1-15: -5)
   $2 = token '\n' (2.16-3.0: )
-> $$ = nterm line (2.1-3.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-2.0: )
   $2 = nterm line (2.1-3.0: )
-> $$ = nterm input (1.1-3.0: )
Entering state 6
Reading a token
Next token is token '\n' (3.1-4.0: )
Shifting token '\n' (3.1-4.0: )
Entering state 3
Reducing stack 0 by rule 3 (line 83):
   $1 = token '\n' (3.1-4.0: )
-> $$ = nterm line (3.1-4.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-3.0: )
   $2 = nterm line (3.1-4.0: )
-> $$ = nterm input (1.1-4.0: )
Entering state 6
Reading a token
Next token is token '-' (4.1: )
Shifting token '-' (4.1: )
Entering state 2
Reading a token
Next token is token "number" (4.2: 1)
Shifting token "number" (4.2: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (4.2: 1)
-> $$ = nterm exp (4.2: 1)
Entering state 10
Reading a token
Next token is token '^' (4.3: )
Shifting token '^' (4.3: )
Entering state 24
Reading a token
Next token is token "number" (4.4: 2)
Shifting token "number" (4.4: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (4.4: 2)
-> $$ = nterm exp (4.4: 2)
Entering state 33
Reading a token
Next token is token '=' (4.6: )
Reducing stack 0 by rule 12 (line 112):
   $1 = nterm exp (4.2: 1)
   $2 = token '^' (4.3: )
   $3 = nterm exp (4.4: 2)
-> $$ = nterm exp (4.2-4: 1)
Entering state 10
Next token is token '=' (4.6: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (4.1: )
   $2 = nterm exp (4.2-4: 1)
-> $$ = nterm exp (4.1-4: -1)
Entering state 8
Next token is token '=' (4.6: )
Shifting token '=' (4.6: )
Entering state 19
Reading a token
Next token is token '-' (4.8: )
Shifting token '-' (4.8: )
Entering state 2
Reading a token
Next token is token "number" (4.9: 1)
Shifting token "number" (4.9: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (4.9: 1)
-> $$ = nterm exp (4.9: 1)
Entering state 10
Reading a token
Next token is token '\n' (4.10-5.0: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (4.8: )
   $2 = nterm exp (4.9: 1)
-> $$ = nterm exp (4.8-9: -1)
Entering state 28
Next token is token '\n' (4.10-5.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (4.1-4: -1)
   $2 = token '=' (4.6: )
   $3 = nterm exp (4.8-9: -1)
-> $$ = nterm exp (4.1-9: -1)
Entering state 8
Next token is token '\n' (4.10-5.0: )
Shifting token '\n' (4.10-5.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (4.1-9: -1)
   $2 = token '\n' (4.10-5.0: )
-> $$ = nterm line (4.1-5.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-4.0: )
   $2 = nterm line (4.1-5.0: )
-> $$ = nterm input (1.1-5.0: )
Entering state 6
Reading a token
Next token is token '(' (5.1: )
Shifting token '(' (5.1: )
Entering state 4
Reading a token
Next token is token '-' (5.2: )
Shifting token '-' (5.2: )
Entering state 2
Reading a token
Next token is token "number" (5.3: 1)
Shifting token "number" (5.3: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (5.3: 1)
-> $$ = nterm exp (5.3: 1)
Entering state 10
Reading a token
Next token is token ')' (5.4: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (5.2: )
   $2 = nterm exp (5.3: 1)
-> $$ = nterm exp (5.2-3: -1)
Entering state 12
Next token is token ')' (5.4: )
Shifting token ')' (5.4: )
Entering state 27
Reducing stack 0 by rule 13 (line 113):
   $1 = token '(' (5.1: )
   $2 = nterm exp (5.2-3: -1)
   $3 = token ')' (5.4: )
-> $$ = nterm exp (5.1-4: -1)
Entering state 8
Reading a token
Next token is token '^' (5.5: )
Shifting token '^' (5.5: )
Entering state 24
Reading a token
Next token is token "number" (5.6: 2)
Shifting token "number" (5.6: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (5.6: 2)
-> $$ = nterm exp (5.6: 2)
Entering state 33
Reading a token
Next token is token '=' (5.8: )
Reducing stack 0 by rule 12 (line 112):
   $1 = nterm exp (5.1-4: -1)
   $2 = token '^' (5.5: )
   $3 = nterm exp (5.6: 2)
-> $$ = nterm exp (5.1-6: 1)
Entering state 8
Next token is token '=' (5.8: )
Shifting token '=' (5.8: )
Entering state 19
Reading a token
Next token is token "number" (5.10: 1)
Shifting token "number" (5.10: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (5.10: 1)
-> $$ = nterm exp (5.10: 1)
Entering state 28
Reading a token
Next token is token '\n' (5.11-6.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (5.1-6: 1)
   $2 = token '=' (5.8: )
   $3 = nterm exp (5.10: 1)
-> $$ = nterm exp (5.1-10: 1)
Entering state 8
Next token is token '\n' (5.11-6.0: )
Shifting token '\n' (5.11-6.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (5.1-10: 1)
   $2 = token '\n' (5.11-6.0: )
-> $$ = nterm line (5.1-6.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-5.0: )
   $2 = nterm line (5.1-6.0: )
-> $$ = nterm input (1.1-6.0: )
Entering state 6
Reading a token
Next token is token '\n' (6.1-7.0: )
Shifting token '\n' (6.1-7.0: )
Entering state 3
Reducing stack 0 by rule 3 (line 83):
   $1 = token '\n' (6.1-7.0: )
-> $$ = nterm line (6.1-7.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-6.0: )
   $2 = nterm line (6.1-7.0: )
-> $$ = nterm input (1.1-7.0: )
Entering state 6
Reading a token
Next token is token '-' (7.1: )
Shifting token '-' (7.1: )
Entering state 2
Reading a token
Next token is token '-' (7.2: )
Shifting token '-' (7.2: )
Entering state 2
Reading a token
Next token is token '-' (7.3: )
Shifting token '-' (7.3: )
Entering state 2
Reading a token
Next token is token "number" (7.4: 1)
Shifting token "number" (7.4: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (7.4: 1)
-> $$ = nterm exp (7.4: 1)
Entering state 10
Reading a token
Next token is token '=' (7.6: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (7.3: )
   $2 = nterm exp (7.4: 1)
-> $$ = nterm exp (7.3-4: -1)
Entering state 10
Next token is token '=' (7.6: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (7.2: )
   $2 = nterm exp (7.3-4: -1)
-> $$ = nterm exp (7.2-4: 1)
Entering state 10
Next token is token '=' (7.6: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (7.1: )
   $2 = nterm exp (7.2-4: 1)
-> $$ = nterm exp (7.1-4: -1)
Entering state 8
Next token is token '=' (7.6: )
Shifting token '=' (7.6: )
Entering state 19
Reading a token
Next token is token '-' (7.8: )
Shifting token '-' (7.8: )
Entering state 2
Reading a token
Next token is token "number" (7.9: 1)
Shifting token "number" (7.9: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (7.9: 1)
-> $$ = nterm exp (7.9: 1)
Entering state 10
Reading a token
Next token is token '\n' (7.10-8.0: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (7.8: )
   $2 = nterm exp (7.9: 1)
-> $$ = nterm exp (7.8-9: -1)
Entering state 28
Next token is token '\n' (7.10-8.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (7.1-4: -1)
   $2 = token '=' (7.6: )
   $3 = nterm exp (7.8-9: -1)
-> $$ = nterm exp (7.1-9: -1)
Entering state 8
Next token is token '\n' (7.10-8.0: )
Shifting token '\n' (7.10-8.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (7.1-9: -1)
   $2 = token '\n' (7.10-8.0: )
-> $$ = nterm line (7.1-8.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-7.0: )
   $2 = nterm line (7.1-8.0: )
-> $$ = nterm input (1.1-8.0: )
Entering state 6
Reading a token
Next token is token '\n' (8.1-9.0: )
Shifting token '\n' (8.1-9.0: )
Entering state 3
Reducing stack 0 by rule 3 (line 83):
   $1 = token '\n' (8.1-9.0: )
-> $$ = nterm line (8.1-9.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-8.0: )
   $2 = nterm line (8.1-9.0: )
-> $$ = nterm input (1.1-9.0: )
Entering state 6
Reading a token
Next token is token "number" (9.1: 1)
Shifting token "number" (9.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (9.1: 1)
-> $$ = nterm exp (9.1: 1)
Entering state 8
Reading a token
Next token is token '-' (9.3: )
Shifting token '-' (9.3: )
Entering state 20
Reading a token
Next token is token "number" (9.5: 2)
Shifting token "number" (9.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (9.5: 2)
-> $$ = nterm exp (9.5: 2)
Entering state 29
Reading a token
Next token is token '-' (9.7: )
Reducing stack 0 by rule 8 (line 100):
   $1 = nterm exp (9.1: 1)
   $2 = token '-' (9.3: )
   $3 = nterm exp (9.5: 2)
-> $$ = nterm exp (9.1-5: -1)
Entering state 8
Next token is token '-' (9.7: )
Shifting token '-' (9.7: )
Entering state 20
Reading a token
Next token is token "number" (9.9: 3)
Shifting token "number" (9.9: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (9.9: 3)
-> $$ = nterm exp (9.9: 3)
Entering state 29
Reading a token
Next token is token '=' (9.11: )
Reducing stack 0 by rule 8 (line 100):
   $1 = nterm exp (9.1-5: -1)
   $2 = token '-' (9.7: )
   $3 = nterm exp (9.9: 3)
-> $$ = nterm exp (9.1-9: -4)
Entering state 8
Next token is token '=' (9.11: )
Shifting token '=' (9.11: )
Entering state 19
Reading a token
Next token is token '-' (9.13: )
Shifting token '-' (9.13: )
Entering state 2
Reading a token
Next token is token "number" (9.14: 4)
Shifting token "number" (9.14: 4)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (9.14: 4)
-> $$ = nterm exp (9.14: 4)
Entering state 10
Reading a token
Next token is token '\n' (9.15-10.0: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (9.13: )
   $2 = nterm exp (9.14: 4)
-> $$ = nterm exp (9.13-14: -4)
Entering state 28
Next token is token '\n' (9.15-10.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (9.1-9: -4)
   $2 = token '=' (9.11: )
   $3 = nterm exp (9.13-14: -4)
-> $$ = nterm exp (9.1-14: -4)
Entering state 8
Next token is token '\n' (9.15-10.0: )
Shifting token '\n' (9.15-10.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (9.1-14: -4)
   $2 = token '\n' (9.15-10.0: )
-> $$ = nterm line (9.1-10.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-9.0: )
   $2 = nterm line (9.1-10.0: )
-> $$ = nterm input (1.1-10.0: )
Entering state 6
Reading a token
Next token is token "number" (10.1: 1)
Shifting token "number" (10.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (10.1: 1)
-> $$ = nterm exp (10.1: 1)
Entering state 8
Reading a token
Next token is token '-' (10.3: )
Shifting token '-' (10.3: )
Entering state 20
Reading a token
Next token is token '(' (10.5: )
Shifting token '(' (10.5: )
Entering state 4
Reading a token
Next token is token "number" (10.6: 2)
Shifting token "number" (10.6: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (10.6: 2)
-> $$ = nterm exp (10.6: 2)
Entering state 12
Reading a token
Next token is token '-' (10.8: )
Shifting token '-' (10.8: )
Entering state 20
Reading a token
Next token is token "number" (10.10: 3)
Shifting token "number" (10.10: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (10.10: 3)
-> $$ = nterm exp (10.10: 3)
Entering state 29
Reading a token
Next token is token ')' (10.11: )
Reducing stack 0 by rule 8 (line 100):
   $1 = nterm exp (10.6: 2)
   $2 = token '-' (10.8: )
   $3 = nterm exp (10.10: 3)
-> $$ = nterm exp (10.6-10: -1)
Entering state 12
Next token is token ')' (10.11: )
Shifting token ')' (10.11: )
Entering state 27
Reducing stack 0 by rule 13 (line 113):
   $1 = token '(' (10.5: )
   $2 = nterm exp (10.6-10: -1)
   $3 = token ')' (10.11: )
-> $$ = nterm exp (10.5-11: -1)
Entering state 29
Reading a token
Next token is token '=' (10.13: )
Reducing stack 0 by rule 8 (line 100):
   $1 = nterm exp (10.1: 1)
   $2 = token '-' (10.3: )
   $3 = nterm exp (10.5-11: -1)
-> $$ = nterm exp (10.1-11: 2)
Entering state 8
Next token is token '=' (10.13: )
Shifting token '=' (10.13: )
Entering state 19
Reading a token
Next token is token "number" (10.15: 2)
Shifting token "number" (10.15: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (10.15: 2)
-> $$ = nterm exp (10.15: 2)
Entering state 28
Reading a token
Next token is token '\n' (10.16-11.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (10.1-11: 2)
   $2 = token '=' (10.13: )
   $3 = nterm exp (10.15: 2)
-> $$ = nterm exp (10.1-15: 2)
Entering state 8
Next token is token '\n' (10.16-11.0: )
Shifting token '\n' (10.16-11.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (10.1-15: 2)
   $2 = token '\n' (10.16-11.0: )
-> $$ = nterm line (10.1-11.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-10.0: )
   $2 = nterm line (10.1-11.0: )
-> $$ = nterm input (1.1-11.0: )
Entering state 6
Reading a token
Next token is token '\n' (11.1-12.0: )
Shifting token '\n' (11.1-12.0: )
Entering state 3
Reducing stack 0 by rule 3 (line 83):
   $1 = token '\n' (11.1-12.0: )
-> $$ = nterm line (11.1-12.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-11.0: )
   $2 = nterm line (11.1-12.0: )
-> $$ = nterm input (1.1-12.0: )
Entering state 6
Reading a token
Next token is token "number" (12.1: 2)
Shifting token "number" (12.1: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (12.1: 2)
-> $$ = nterm exp (12.1: 2)
Entering state 8
Reading a token
Next token is token '^' (12.2: )
Shifting token '^' (12.2: )
Entering state 24
Reading a token
Next token is token "number" (12.3: 2)
Shifting token "number" (12.3: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (12.3: 2)
-> $$ = nterm exp (12.3: 2)
Entering state 33
Reading a token
Next token is token '^' (12.4: )
Shifting token '^' (12.4: )
Entering state 24
Reading a token
Next token is token "number" (12.5: 3)
Shifting token "number" (12.5: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (12.5: 3)
-> $$ = nterm exp (12.5: 3)
Entering state 33
Reading a token
Next token is token '=' (12.7: )
Reducing stack 0 by rule 12 (line 112):
   $1 = nterm exp (12.3: 2)
   $2 = token '^' (12.4: )
   $3 = nterm exp (12.5: 3)
-> $$ = nterm exp (12.3-5: 8)
Entering state 33
Next token is token '=' (12.7: )
Reducing stack 0 by rule 12 (line 112):
   $1 = nterm exp (12.1: 2)
   $2 = token '^' (12.2: )
   $3 = nterm exp (12.3-5: 8)
-> $$ = nterm exp (12.1-5: 256)
Entering state 8
Next token is token '=' (12.7: )
Shifting token '=' (12.7: )
Entering state 19
Reading a token
Next token is token "number" (12.9-11: 256)
Shifting token "number" (12.9-11: 256)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (12.9-11: 256)
-> $$ = nterm exp (12.9-11: 256)
Entering state 28
Reading a token
Next token is token '\n' (12.12-13.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (12.1-5: 256)
   $2 = token '=' (12.7: )
   $3 = nterm exp (12.9-11: 256)
-> $$ = nterm exp (12.1-11: 256)
Entering state 8
Next token is token '\n' (12.12-13.0: )
Shifting token '\n' (12.12-13.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (12.1-11: 256)
   $2 = token '\n' (12.12-13.0: )
-> $$ = nterm line (12.1-13.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-12.0: )
   $2 = nterm line (12.1-13.0: )
-> $$ = nterm input (1.1-13.0: )
Entering state 6
Reading a token
Next token is token '(' (13.1: )
Shifting token '(' (13.1: )
Entering state 4
Reading a token
Next token is token "number" (13.2: 2)
Shifting token "number" (13.2: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (13.2: 2)
-> $$ = nterm exp (13.2: 2)
Entering state 12
Reading a token
Next token is token '^' (13.3: )
Shifting token '^' (13.3: )
Entering state 24
Reading a token
Next token is token "number" (13.4: 2)
Shifting token "number" (13.4: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (13.4: 2)
-> $$ = nterm exp (13.4: 2)
Entering state 33
Reading a token
Next token is token ')' (13.5: )
Reducing stack 0 by rule 12 (line 112):
   $1 = nterm exp (13.2: 2)
   $2 = token '^' (13.3: )
   $3 = nterm exp (13.4: 2)
-> $$ = nterm exp (13.2-4: 4)
Entering state 12
Next token is token ')' (13.5: )
Shifting token ')' (13.5: )
Entering state 27
Reducing stack 0 by rule 13 (line 113):
   $1 = token '(' (13.1: )
   $2 = nterm exp (13.2-4: 4)
   $3 = token ')' (13.5: )
-> $$ = nterm exp (13.1-5: 4)
Entering state 8
Reading a token
Next token is token '^' (13.6: )
Shifting token '^' (13.6: )
Entering state 24
Reading a token
Next token is token "number" (13.7: 3)
Shifting token "number" (13.7: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (13.7: 3)
-> $$ = nterm exp (13.7: 3)
Entering state 33
Reading a token
Next token is token '=' (13.9: )
Reducing stack 0 by rule 12 (line 112):
   $1 = nterm exp (13.1-5: 4)
   $2 = token '^' (13.6: )
   $3 = nterm exp (13.7: 3)
-> $$ = nterm exp (13.1-7: 64)
Entering state 8
Next token is token '=' (13.9: )
Shifting token '=' (13.9: )
Entering state 19
Reading a token
Next token is token "number" (13.11-12: 64)
Shifting token "number" (13.11-12: 64)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (13.11-12: 64)
-> $$ = nterm exp (13.11-12: 64)
Entering state 28
Reading a token
Next token is token '\n' (13.13-14.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (13.1-7: 64)
   $2 = token '=' (13.9: )
   $3 = nterm exp (13.11-12: 64)
-> $$ = nterm exp (13.1-12: 64)
Entering state 8
Next token is token '\n' (13.13-14.0: )
Shifting token '\n' (13.13-14.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (13.1-12: 64)
   $2 = token '\n' (13.13-14.0: )
-> $$ = nterm line (13.1-14.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-13.0: )
   $2 = nterm line (13.1-14.0: )
-> $$ = nterm input (1.1-14.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (14.1: )
Entering state 17
Cleanup: popping token "end of input" (14.1: )
Cleanup: popping nterm input (1.1-14.0: )
./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.3: )
Shifting token '+' (1.3: )
Entering state 21
Reading a token
Next token is token "number" (1.5: 2)
Shifting token "number" (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 30
Reading a token
Next token is token '*' (1.7: )
Shifting token '*' (1.7: )
Entering state 22
Reading a token
Next token is token "number" (1.9: 3)
Shifting token "number" (1.9: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.9: 3)
-> $$ = nterm exp (1.9: 3)
Entering state 31
Reading a token
Next token is token '=' (1.11: )
Reducing stack 0 by rule 9 (line 101):
   $1 = nterm exp (1.5: 2)
   $2 = token '*' (1.7: )
   $3 = nterm exp (1.9: 3)
-> $$ = nterm exp (1.5-9: 6)
Entering state 30
Next token is token '=' (1.11: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.3: )
   $3 = nterm exp (1.5-9: 6)
-> $$ = nterm exp (1.1-9: 7)
Entering state 8
Next token is token '=' (1.11: )
Shifting token '=' (1.11: )
Entering state 19
Reading a token
Next token is token "number" (1.13: 7)
Shifting token "number" (1.13: 7)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.13: 7)
-> $$ = nterm exp (1.13: 7)
Entering state 28
Reading a token
Next token is token '\n' (1.14-2.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (1.1-9: 7)
   $2 = token '=' (1.11: )
   $3 = nterm exp (1.13: 7)
-> $$ = nterm exp (1.1-13: 7)
Entering state 8
Next token is token '\n' (1.14-2.0: )
Shifting token '\n' (1.14-2.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (1.1-13: 7)
   $2 = token '\n' (1.14-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 78):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Next token is token "number" (2.1: 1)
Shifting token "number" (2.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (2.1: 1)
-> $$ = nterm exp (2.1: 1)
Entering state 8
Reading a token
Next token is token '+' (2.3: )
Shifting token '+' (2.3: )
Entering state 21
Reading a token
Next token is token "number" (2.5: 2)
Shifting token "number" (2.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (2.5: 2)
-> $$ = nterm exp (2.5: 2)
Entering state 30
Reading a token
Next token is token '*' (2.7: )
Shifting token '*' (2.7: )
Entering state 22
Reading a token
Next token is token '-' (2.9: )
Shifting token '-' (2.9: )
Entering state 2
Reading a token
Next token is token "number" (2.10: 3)
Shifting token "number" (2.10: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (2.10: 3)
-> $$ = nterm exp (2.10: 3)
Entering state 10
Reading a token
Next token is token '=' (2.12: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (2.9: )
   $2 = nterm exp (2.10: 3)
-> $$ = nterm exp (2.9-10: -3)
Entering state 31
Next token is token '=' (2.12: )
Reducing stack 0 by rule 9 (line 101):
   $1 = nterm exp (2.5: 2)
   $2 = token '*' (2.7: )
   $3 = nterm exp (2.9-10: -3)
-> $$ = nterm exp (2.5-10: -6)
Entering state 30
Next token is token '=' (2.12: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (2.1: 1)
   $2 = token '+' (2.3: )
   $3 = nterm exp (2.5-10: -6)
-> $$ = nterm exp (2.1-10: -5)
Entering state 8
Next token is token '=' (2.12: )
Shifting token '=' (2.12: )
Entering state 19
Reading a token
Next token is token '-' (2.14: )
Shifting token '-' (2.14: )
Entering state 2
Reading a token
Next token is token "number" (2.15: 5)
Shifting token "number" (2.15: 5)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (2.15: 5)
-> $$ = nterm exp (2.15: 5)
Entering state 10
Reading a token
Next token is token '\n' (2.16-3.0: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (2.14: )
   $2 = nterm exp (2.15: 5)
-> $$ = nterm exp (2.14-15: -5)
Entering state 28
Next token is token '\n' (2.16-3.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (2.1-10: -5)
   $2 = token '=' (2.12: )
   $3 = nterm exp (2.14-15: -5)
-> $$ = nterm exp (2.1-15: -5)
Entering state 8
Next token is token '\n' (2.16-3.0: )
Shifting token '\n' (2.16-3.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (2.1-15: -5)
   $2 = token '\n' (2.16-3.0: )
-> $$ = nterm line (2.1-3.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-2.0: )
   $2 = nterm line (2.1-3.0: )
-> $$ = nterm input (1.1-3.0: )
Entering state 6
Reading a token
Next token is token '\n' (3.1-4.0: )
Shifting token '\n' (3.1-4.0: )
Entering state 3
Reducing stack 0 by rule 3 (line 83):
   $1 = token '\n' (3.1-4.0: )
-> $$ = nterm line (3.1-4.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-3.0: )
   $2 = nterm line (3.1-4.0: )
-> $$ = nterm input (1.1-4.0: )
Entering state 6
Reading a token
Next token is token '-' (4.1: )
Shifting token '-' (4.1: )
Entering state 2
Reading a token
Next token is token "number" (4.2: 1)
Shifting token "number" (4.2: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (4.2: 1)
-> $$ = nterm exp (4.2: 1)
Entering state 10
Reading a token
Next token is token '^' (4.3: )
Shifting token '^' (4.3: )
Entering state 24
Reading a token
Next token is token "number" (4.4: 2)
Shifting token "number" (4.4: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (4.4: 2)
-> $$ = nterm exp (4.4: 2)
Entering state 33
Reading a token
Next token is token '=' (4.6: )
Reducing stack 0 by rule 12 (line 112):
   $1 = nterm exp (4.2: 1)
   $2 = token '^' (4.3: )
   $3 = nterm exp (4.4: 2)
-> $$ = nterm exp (4.2-4: 1)
Entering state 10
Next token is token '=' (4.6: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (4.1: )
   $2 = nterm exp (4.2-4: 1)
-> $$ = nterm exp (4.1-4: -1)
Entering state 8
Next token is token '=' (4.6: )
Shifting token '=' (4.6: )
Entering state 19
Reading a token
Next token is token '-' (4.8: )
Shifting token '-' (4.8: )
Entering state 2
Reading a token
Next token is token "number" (4.9: 1)
Shifting token "number" (4.9: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (4.9: 1)
-> $$ = nterm exp (4.9: 1)
Entering state 10
Reading a token
Next token is token '\n' (4.10-5.0: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (4.8: )
   $2 = nterm exp (4.9: 1)
-> $$ = nterm exp (4.8-9: -1)
Entering state 28
Next token is token '\n' (4.10-5.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (4.1-4: -1)
   $2 = token '=' (4.6: )
   $3 = nterm exp (4.8-9: -1)
-> $$ = nterm exp (4.1-9: -1)
Entering state 8
Next token is token '\n' (4.10-5.0: )
Shifting token '\n' (4.10-5.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (4.1-9: -1)
   $2 = token '\n' (4.10-5.0: )
-> $$ = nterm line (4.1-5.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-4.0: )
   $2 = nterm line (4.1-5.0: )
-> $$ = nterm input (1.1-5.0: )
Entering state 6
Reading a token
Next token is token '(' (5.1: )
Shifting token '(' (5.1: )
Entering state 4
Reading a token
Next token is token '-' (5.2: )
Shifting token '-' (5.2: )
Entering state 2
Reading a token
Next token is token "number" (5.3: 1)
Shifting token "number" (5.3: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (5.3: 1)
-> $$ = nterm exp (5.3: 1)
Entering state 10
Reading a token
Next token is token ')' (5.4: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (5.2: )
   $2 = nterm exp (5.3: 1)
-> $$ = nterm exp (5.2-3: -1)
Entering state 12
Next token is token ')' (5.4: )
Shifting token ')' (5.4: )
Entering state 27
Reducing stack 0 by rule 13 (line 113):
   $1 = token '(' (5.1: )
   $2 = nterm exp (5.2-3: -1)
   $3 = token ')' (5.4: )
-> $$ = nterm exp (5.1-4: -1)
Entering state 8
Reading a token
Next token is token '^' (5.5: )
Shifting token '^' (5.5: )
Entering state 24
Reading a token
Next token is token "number" (5.6: 2)
Shifting token "number" (5.6: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (5.6: 2)
-> $$ = nterm exp (5.6: 2)
Entering state 33
Reading a token
Next token is token '=' (5.8: )
Reducing stack 0 by rule 12 (line 112):
   $1 = nterm exp (5.1-4: -1)
   $2 = token '^' (5.5: )
   $3 = nterm exp (5.6: 2)
-> $$ = nterm exp (5.1-6: 1)
Entering state 8
Next token is token '=' (5.8: )
Shifting token '=' (5.8: )
Entering state 19
Reading a token
Next token is token "number" (5.10: 1)
Shifting token "number" (5.10: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (5.10: 1)
-> $$ = nterm exp (5.10: 1)
Entering state 28
Reading a token
Next token is token '\n' (5.11-6.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (5.1-6: 1)
   $2 = token '=' (5.8: )
   $3 = nterm exp (5.10: 1)
-> $$ = nterm exp (5.1-10: 1)
Entering state 8
Next token is token '\n' (5.11-6.0: )
Shifting token '\n' (5.11-6.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (5.1-10: 1)
   $2 = token '\n' (5.11-6.0: )
-> $$ = nterm line (5.1-6.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-5.0: )
   $2 = nterm line (5.1-6.0: )
-> $$ = nterm input (1.1-6.0: )
Entering state 6
Reading a token
Next token is token '\n' (6.1-7.0: )
Shifting token '\n' (6.1-7.0: )
Entering state 3
Reducing stack 0 by rule 3 (line 83):
   $1 = token '\n' (6.1-7.0: )
-> $$ = nterm line (6.1-7.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-6.0: )
   $2 = nterm line (6.1-7.0: )
-> $$ = nterm input (1.1-7.0: )
Entering state 6
Reading a token
Next token is token '-' (7.1: )
Shifting token '-' (7.1: )
Entering state 2
Reading a token
Next token is token '-' (7.2: )
Shifting token '-' (7.2: )
Entering state 2
Reading a token
Next token is token '-' (7.3: )
Shifting token '-' (7.3: )
Entering state 2
Reading a token
Next token is token "number" (7.4: 1)
Shifting token "number" (7.4: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (7.4: 1)
-> $$ = nterm exp (7.4: 1)
Entering state 10
Reading a token
Next token is token '=' (7.6: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (7.3: )
   $2 = nterm exp (7.4: 1)
-> $$ = nterm exp (7.3-4: -1)
Entering state 10
Next token is token '=' (7.6: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (7.2: )
   $2 = nterm exp (7.3-4: -1)
-> $$ = nterm exp (7.2-4: 1)
Entering state 10
Next token is token '=' (7.6: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (7.1: )
   $2 = nterm exp (7.2-4: 1)
-> $$ = nterm exp (7.1-4: -1)
Entering state 8
Next token is token '=' (7.6: )
Shifting token '=' (7.6: )
Entering state 19
Reading a token
Next token is token '-' (7.8: )
Shifting token '-' (7.8: )
Entering state 2
Reading a token
Next token is token "number" (7.9: 1)
Shifting token "number" (7.9: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (7.9: 1)
-> $$ = nterm exp (7.9: 1)
Entering state 10
Reading a token
Next token is token '\n' (7.10-8.0: )
Reducing stack 0 by rule 11 (line 111):
   $1 = token '-' (7.8: )
   $2 = nterm exp (7.9: 1)
-> $$ = nterm exp (7.8-9: -1)
Entering state 28
Next token is token '\n' (7.10-8.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (7.1-4: -1)
   $2 = token '=' (7.6: )
   $3 = nterm exp (7.8-9: -1)
-> $$ = nterm exp (7.1-9: -1)
Entering state 8
Next token is token '\n' (7.10-8.0: )
Shifting token '\n' (7.10-8.0: )
Entering state
25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Reading a token Next token is token 
'-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering 
state 29 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next 
token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line 
(12.1-13.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Reducing stack 
0 by rule 5 (line 88): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (14.1: ) Entering state 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) input: | 1 2 ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.3: 2) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.3: 2) ./calc.at:1411: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: 
(.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1411: cat stderr input: | 1//2 ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1411: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1411: cat stderr input: | error ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) ./calc.at:1411: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1411: cat stderr input: | 1 = 2 = 3 ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse 
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '=' (1.3: )
Shifting token '=' (1.3: )
Entering state 19
Reading a token
Next token is token "number" (1.5: 2)
Shifting token "number" (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 28
Reading a token
Next token is token '=' (1.7: )
1.7: syntax error, unexpected '='
Error: popping nterm exp (1.5: 2)
Error: popping token '=' (1.3: )
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token '=' (1.7: )
./calc.at:1411: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1411: cat stderr
input:
  |
  | +1
./calc.at:1411: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '\n' (1.1-2.0: )
Shifting token '\n' (1.1-2.0: )
Entering state 3
Reducing stack 0 by rule 3 (line 83):
   $1 = token '\n' (1.1-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 78):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Next token is token '+' (2.1: )
2.1: syntax error, unexpected '+'
Error: popping nterm input (1.1-2.0: )
Cleanup: discarding lookahead token '+' (2.1: )
./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '\n' (1.1-2.0: )
Shifting token '\n' (1.1-2.0: )
Entering state 3
Reducing stack 0 by rule 3 (line 83):
   $1 = token '\n' (1.1-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 78):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Next token is token '+' (2.1: )
2.1: syntax error, unexpected '+'
Error: popping nterm input (1.1-2.0: )
Cleanup: discarding lookahead token '+' (2.1: )
./calc.at:1411: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1411: cat stderr
./calc.at:1411: $PREPARSER ./calc /dev/null
stderr:
Starting parse
Entering state 0
Reading a token
Now at end of input.
1.1: syntax error, unexpected end of input
Cleanup: discarding lookahead token "end of input" (1.1: )
./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Now at end of input.
1.1: syntax error, unexpected end of input
Cleanup: discarding lookahead token "end of input" (1.1: )
./calc.at:1411: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1411: cat stderr
input:
  | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1411: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token ')' (1.2: )
1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
Shifting token error (1.2: )
Entering state 11
Next token is token ')' (1.2: )
Shifting token ')' (1.2: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.1: )
   $2 = token error (1.2: )
   $3 = token ')' (1.2: )
-> $$ = nterm exp (1.1-2: 1111)
Entering state 8
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 21
Reading a token
Next token is token '(' (1.6: )
Shifting token '(' (1.6: )
Entering state 4
Reading a token
Next token is token "number" (1.7: 1)
Shifting token "number" (1.7: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.7: 1)
-> $$ = nterm exp (1.7: 1)
Entering state 12
Reading a token
Next token is token '+' (1.9: )
Shifting token '+' (1.9: )
Entering state 21
Reading a token
Next token is token "number" (1.11: 1)
Shifting token "number" (1.11: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.11: 1)
-> $$ = nterm exp (1.11: 1)
Entering state 30
Reading a token
Next token is token '+' (1.13: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.7: 1)
   $2 = token '+' (1.9: )
   $3 = nterm exp (1.11: 1)
-> $$ = nterm exp (1.7-11: 2)
Entering state 12
Next token is token '+' (1.13: )
Shifting token '+' (1.13: )
Entering state 21
Reading a token
Next token is token "number" (1.15: 1)
Shifting token "number" (1.15: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.15: 1)
-> $$ = nterm exp (1.15: 1)
Entering state 30
Reading a token
Next token is token '+' (1.17: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.7-11: 2)
   $2 = token '+' (1.13: )
   $3 = nterm exp (1.15: 1)
-> $$ = nterm exp (1.7-15: 3)
Entering state 12
Next token is token '+' (1.17: )
Shifting token '+' (1.17: )
Entering state 21
Reading a token
Next token is token ')' (1.18: )
1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
Error: popping token '+' (1.17: )
Error: popping nterm exp (1.7-15: 3)
Shifting token error (1.7-18: )
Entering state 11
Next token is token ')' (1.18: )
Shifting token ')' (1.18: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.6: )
   $2 = token error (1.7-18: )
   $3 = token ')' (1.18: )
-> $$ = nterm exp (1.6-18: 1111)
Entering state 30
Reading a token
Next token is token '+' (1.20: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1-2: 1111)
   $2 = token '+' (1.4: )
   $3 = nterm exp (1.6-18: 1111)
-> $$ = nterm exp (1.1-18: 2222)
Entering state 8
Next token is token '+' (1.20: )
Shifting token '+' (1.20: )
Entering state 21
Reading a token
Next token is token '(' (1.22: )
Shifting token '(' (1.22: )
Entering state 4
Reading a token
Next token is token '*' (1.23: )
1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error (1.23: )
Entering state 11
Next token is token '*' (1.23: )
Error: discarding token '*' (1.23: )
Reading a token
Next token is token '*' (1.25: )
Error: discarding token '*' (1.25: )
Reading a token
Next token is token '*' (1.27: )
Error: discarding token '*' (1.27: )
Reading a token
Next token is token ')' (1.28: )
Entering state 11
Next token is token ')' (1.28: )
Shifting token ')' (1.28: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.22: )
   $2 = token error (1.23-27: )
   $3 = token ')' (1.28: )
-> $$ = nterm exp (1.22-28: 1111)
Entering state 30
Reading a token
Next token is token '+' (1.30: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1-18: 2222)
   $2 = token '+' (1.20: )
   $3 = nterm exp (1.22-28: 1111)
-> $$ = nterm exp (1.1-28: 3333)
Entering state 8
Next token is token '+' (1.30: )
Shifting token '+' (1.30: )
Entering state 21
Reading a token
Next token is token '(' (1.32: )
Shifting token '(' (1.32: )
Entering state 4
Reading a token
Next token is token "number" (1.33: 1)
Shifting token "number" (1.33: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.33: 1)
-> $$ = nterm exp (1.33: 1)
Entering state 12
Reading a token
Next token is token '*' (1.35: )
Shifting token '*' (1.35: )
Entering state 22
Reading a token
Next token is token "number" (1.37: 2)
Shifting token "number" (1.37: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.37: 2)
-> $$ = nterm exp (1.37: 2)
Entering state 31
Reading a token
Next token is token '*' (1.39: )
Reducing stack 0 by rule 9 (line 101):
   $1 = nterm exp (1.33: 1)
   $2 = token '*' (1.35: )
   $3 = nterm exp (1.37: 2)
-> $$ = nterm exp (1.33-37: 2)
Entering state 12
Next token is token '*' (1.39: )
Shifting token '*' (1.39: )
Entering state 22
Reading a token
Next token is token '*' (1.41: )
1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Error: popping token '*' (1.39: )
Error: popping nterm exp (1.33-37: 2)
Shifting token error (1.33-41: )
Entering state 11
Next token is token '*' (1.41: )
Error: discarding token '*' (1.41: )
Reading a token
Next token is token ')' (1.42: )
Entering state 11
Next token is token ')' (1.42: )
Shifting token ')' (1.42: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.32: )
   $2 = token error (1.33-41: )
   $3 = token ')' (1.42: )
-> $$ = nterm exp (1.32-42: 1111)
Entering state 30
Reading a token
Next token is token '=' (1.44: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1-28: 3333)
   $2 = token '+' (1.30: )
   $3 = nterm exp (1.32-42: 1111)
-> $$ = nterm exp (1.1-42: 4444)
Entering state 8
Next token is token '=' (1.44: )
Shifting token '=' (1.44: )
Entering state 19
Reading a token
Next token is token "number" (1.46: 1)
Shifting token "number" (1.46: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.46: 1)
-> $$ = nterm exp (1.46: 1)
Entering state 28
Reading a token
Next token is token '\n' (1.47-2.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (1.1-42: 4444)
   $2 = token '=' (1.44: )
   $3 = nterm exp (1.46: 1)
1.1-46: error: 4444 != 1
-> $$ = nterm exp (1.1-46: 4444)
Entering state 8
Next token is token '\n' (1.47-2.0: )
Shifting token '\n' (1.47-2.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (1.1-46: 4444)
   $2 = token '\n' (1.47-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 78):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token ')' (1.2: )
1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
Shifting token error (1.2: )
Entering state 11
Next token is token ')' (1.2: )
Shifting token ')' (1.2: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.1: )
   $2 = token error (1.2: )
   $3 = token ')' (1.2: )
-> $$ = nterm exp (1.1-2: 1111)
Entering state 8
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 21
Reading a token
Next token is token '(' (1.6: )
Shifting token '(' (1.6: )
Entering state 4
Reading a token
Next token is token "number" (1.7: 1)
Shifting token "number" (1.7: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.7: 1)
-> $$ = nterm exp (1.7: 1)
Entering state 12
Reading a token
Next token is token '+' (1.9: )
Shifting token '+' (1.9: )
Entering state 21
Reading a token
Next token is token "number" (1.11: 1)
Shifting token "number" (1.11: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.11: 1)
-> $$ = nterm exp (1.11: 1)
Entering state 30
Reading a token
Next token is token '+' (1.13: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.7: 1)
   $2 = token '+' (1.9: )
   $3 = nterm exp (1.11: 1)
-> $$ = nterm exp (1.7-11: 2)
Entering state 12
Next token is token '+' (1.13: )
Shifting token '+' (1.13: )
Entering state 21
Reading a token
Next token is token "number" (1.15: 1)
Shifting token "number" (1.15: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.15: 1)
-> $$ = nterm exp (1.15: 1)
Entering state 30
Reading a token
Next token is token '+' (1.17: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.7-11: 2)
   $2 = token '+' (1.13: )
   $3 = nterm exp (1.15: 1)
-> $$ = nterm exp (1.7-15: 3)
Entering state 12
Next token is token '+' (1.17: )
Shifting token '+' (1.17: )
Entering state 21
Reading a token
Next token is token ')' (1.18: )
1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
Error: popping token '+' (1.17: )
Error: popping nterm exp (1.7-15: 3)
Shifting token error (1.7-18: )
Entering state 11
Next token is token ')' (1.18: )
Shifting token ')' (1.18: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.6: )
   $2 = token error (1.7-18: )
   $3 = token ')' (1.18: )
-> $$ = nterm exp (1.6-18: 1111)
Entering state 30
Reading a token
Next token is token '+' (1.20: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1-2: 1111)
   $2 = token '+' (1.4: )
   $3 = nterm exp (1.6-18: 1111)
-> $$ = nterm exp (1.1-18: 2222)
Entering state 8
Next token is token '+' (1.20: )
Shifting token '+' (1.20: )
Entering state 21
Reading a token
Next token is token '(' (1.22: )
Shifting token '(' (1.22: )
Entering state 4
Reading a token
Next token is token '*' (1.23: )
1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error (1.23: )
Entering state 11
Next token is token '*' (1.23: )
Error: discarding token '*' (1.23: )
Reading a token
Next token is token '*' (1.25: )
Error: discarding token '*' (1.25: )
Reading a token
Next token is token '*' (1.27: )
Error: discarding token '*' (1.27: )
Reading a token
Next token is token ')' (1.28: )
Entering state 11
Next token is token ')' (1.28: )
Shifting token ')' (1.28: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.22: )
   $2 = token error (1.23-27: )
   $3 = token ')' (1.28: )
-> $$ = nterm exp (1.22-28: 1111)
Entering state 30
Reading a token
Next token is token '+' (1.30: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1-18: 2222)
   $2 = token '+' (1.20: )
   $3 = nterm exp (1.22-28: 1111)
-> $$ = nterm exp (1.1-28: 3333)
Entering state 8
Next token is token '+' (1.30: )
Shifting token '+' (1.30: )
Entering state 21
Reading a token
Next token is token '(' (1.32: )
Shifting token '(' (1.32: )
Entering state 4
Reading a token
Next token is token "number" (1.33: 1)
Shifting token "number" (1.33: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.33: 1)
-> $$ = nterm exp (1.33: 1)
Entering state 12
Reading a token
Next token is token '*' (1.35: )
Shifting token '*' (1.35: )
Entering state 22
Reading a token
Next token is token "number" (1.37: 2)
Shifting token "number" (1.37: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.37: 2)
-> $$ = nterm exp (1.37: 2)
Entering state 31
Reading a token
Next token is token '*' (1.39: )
Reducing stack 0 by rule 9 (line 101):
   $1 = nterm exp (1.33: 1)
   $2 = token '*' (1.35: )
   $3 = nterm exp (1.37: 2)
-> $$ = nterm exp (1.33-37: 2)
Entering state 12
Next token is token '*' (1.39: )
Shifting token '*' (1.39: )
Entering state 22
Reading a token
Next token is token '*' (1.41: )
1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1411: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Reducing stack 0 by rule 16 (line 116): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 99): $1 = 
nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Reducing stack 0 by rule 16 (line 116): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 
-> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1411: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 115): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" 
(1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 115): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" 
(1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1411: cat stderr input: | (* *) + (*) + (*) ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1411: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 117): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 117): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) input: | 1 + 2 * 3 + !- ++ ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 118): $1 = token '!' 
(1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 118): $1 = token '!' 
(1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1411: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1411: cat stderr input: | 1 + 2 * 3 + !* ++ ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Reducing stack 0 by rule 19 (line 119): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Reducing stack 0 by rule 19 (line 119): $1 = token '!' 
(1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1411: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1411: cat stderr input: | (#) + (#) = 2222 ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-3: 1111) $2 = 
token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Reducing stack 
0 by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1411: cat stderr input: | (1 + #) = 1111 ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) 
-> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) 
Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1411: cat stderr input: | (# + 1) = 1111 ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 
by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 
0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1411: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Reading a token Next token is token "number" (1.15-18: 1111) Shifting token "number" (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering 
state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Reading a token Next token is token "number" (1.15-18: 1111) Shifting 
token "number" (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1411: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1411: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) 
-> $$ = nterm exp (1.16: 1) Entering state 29 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 102): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> 
$$ = nterm exp (1.16: 1) Entering state 29 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 102): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1411: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: stdout: ./calc.at:1413: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c calc.h ./calc.at:1411: cat stderr 518. 
calc.at:1411: ok input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1413: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token 
is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) 
Entering state 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Reading a 
token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 
111): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 
= nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Reducing stack 0 by 
rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Reading a token Next token is token '-' (9.13: ) Shifting 
token '-' (9.13: ) Entering state 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Reading a 
token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token 
"number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) 
Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 
(line 88): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (14.1: ) Entering state 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: )
./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp
(1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = 
nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 
Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 
Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 
6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Reading a token Next token is token '-' 
(7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 
100): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 
1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) 
Shifting token '\n' (10.16-11.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token 
'=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 
Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (14.1: ) Entering state 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) input: | 1 2 ./calc.at:1413: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.3: 2) ./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.3: 2) ./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1413: cat stderr input: 521. calc.at:1416: testing Calculator %glr-parser %no-lines api.pure parse.error=verbose %debug %locations %header api.prefix={calc} %verbose %parse-param {semantic_value *result}{int *count}{int *nerrs} ... 
| 1//2 ./calc.at:1413: $PREPARSER ./calc input ./calc.at:1416: mv calc.y.tmp calc.y stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1416: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.c calc.y stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1413: cat stderr input: | error ./calc.at:1413: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) ./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) ./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1413: cat stderr input: | 1 = 2 = 3 ./calc.at:1413: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse 
Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1413: cat stderr input: | | +1 ./calc.at:1413: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = 
nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1416: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o calc calc.c calc-lex.c calc-main.c $LIBS ./calc.at:1413: cat stderr ./calc.at:1413: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) ./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) ./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1413: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1413: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing 
stack 0 by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' 
(1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Reading a token Next token 
is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: ./calc.at:1413: cat stderr stdout: ./calc.at:1414: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c calc.h input: | (!!) + (1 2) = 1 ./calc.at:1413: $PREPARSER ./calc input stderr: input: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Reducing stack 0 by rule 16 (line 116): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 
= token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1414: $PREPARSER ./calc input ./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' 
(1.3: ) Entering state 16 Reducing stack 0 by rule 16 (line 116): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): 
$1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1413: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) 
$2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 
= token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm 
line (3.1-4.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token 
'\n' (4.10-5.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Reading a token Next token is token '\n' 
(5.11-6.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 
Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Reading a token Next token is token '-' 
(9.7: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token 
"number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 
Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 
256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ 
= nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (14.1: ) Entering state 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) input: ./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | (- *) + (1 2) = 1 ./calc.at:1413: $PREPARSER ./calc input stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 115): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) 
Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering 
state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token is token "number" (2.15: 5) 
Shifting token "number" (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 
112): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp 
(5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm 
input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' 
(7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token 
is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 
(line 100): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 
Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input 
(1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 
64)
Entering state 28
Reading a token
Next token is token '\n' (13.13-14.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (13.1-7: 64)
   $2 = token '=' (13.9: )
   $3 = nterm exp (13.11-12: 64)
-> $$ = nterm exp (13.1-12: 64)
Entering state 8
Next token is token '\n' (13.13-14.0: )
Shifting token '\n' (13.13-14.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (13.1-12: 64)
   $2 = token '\n' (13.13-14.0: )
-> $$ = nterm line (13.1-14.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-13.0: )
   $2 = nterm line (13.1-14.0: )
-> $$ = nterm input (1.1-14.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (14.1: )
Entering state 17
Cleanup: popping token "end of input" (14.1: )
Cleanup: popping nterm input (1.1-14.0: )
./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
  | 1 2
./calc.at:1414: $PREPARSER ./calc input
stderr:
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token '-' (1.2: )
Shifting token '-' (1.2: )
Entering state 2
Reading a token
Next token is token '*' (1.4: )
1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error (1.4: )
Entering state 9
Reducing stack 0 by rule 15 (line 115):
   $1 = token '-' (1.2: )
   $2 = token error (1.4: )
Shifting token error (1.2-4: )
Entering state 11
Next token is token '*' (1.4: )
Error: discarding token '*' (1.4: )
Reading a token
Next token is token ')' (1.5: )
Entering state 11
Next token is token ')' (1.5: )
Shifting token ')' (1.5: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.1: )
   $2 = token error (1.2-4: )
   $3 = token ')' (1.5: )
-> $$ = nterm exp (1.1-5: 1111)
Entering state 8
Reading a token
Next token is token '+' (1.7: )
Shifting token '+' (1.7: )
Entering state 21
Reading a token
Next token is token '(' (1.9: )
Shifting token '(' (1.9: )
Entering state 4
Reading a token
Next token is token "number" (1.10: 1)
Shifting token "number" (1.10: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.10: 1)
-> $$ = nterm exp (1.10: 1)
Entering state 12
Reading a token
Next token is token "number" (1.12: 2)
1.12: syntax error, unexpected number
Error: popping nterm exp (1.10: 1)
Shifting token error (1.10-12: )
Entering state 11
Next token is token "number" (1.12: 2)
Error: discarding token "number" (1.12: 2)
Reading a token
Next token is token ')' (1.13: )
Entering state 11
Next token is token ')' (1.13: )
Shifting token ')' (1.13: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.9: )
   $2 = token error (1.10-12: )
   $3 = token ')' (1.13: )
-> $$ = nterm exp (1.9-13: 1111)
Entering state 30
Reading a token
Next token is token '=' (1.15: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1-5: 1111)
   $2 = token '+' (1.7: )
   $3 = nterm exp (1.9-13: 1111)
-> $$ = nterm exp (1.1-13: 2222)
Entering state 8
Next token is token '=' (1.15: )
Shifting token '=' (1.15: )
Entering state 19
Reading a token
Next token is token "number" (1.17: 1)
Shifting token "number" (1.17: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number"
(1.17: 1)
-> $$ = nterm exp (1.17: 1)
Entering state 28
Reading a token
Next token is token '\n' (1.18-2.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (1.1-13: 2222)
   $2 = token '=' (1.15: )
   $3 = nterm exp (1.17: 1)
1.1-17: error: 2222 != 1
-> $$ = nterm exp (1.1-17: 2222)
Entering state 8
Next token is token '\n' (1.18-2.0: )
Shifting token '\n' (1.18-2.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (1.1-17: 2222)
   $2 = token '\n' (1.18-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 78):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token "number" (1.3: 2)
1.3: syntax error, unexpected number
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token "number" (1.3: 2)
./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1413: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token "number" (1.3: 2)
1.3: syntax error, unexpected number
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token "number" (1.3: 2)
./calc.at:1414: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1413: cat stderr
./calc.at:1414: cat stderr
input:
  | 1//2
./calc.at:1414: $PREPARSER ./calc input
input:
  | (* *) + (*) + (*)
./calc.at:1413: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '/' (1.2: )
Shifting token '/' (1.2: )
Entering state 23
Reading a token
Next token is token '/' (1.3: )
1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!'
Error: popping token '/' (1.2: )
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token '/' (1.3: )
stderr:
./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token '*' (1.2: )
1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: ./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 23 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: )
Entering state 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1414: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1414: cat stderr
input:
./calc.at:1413: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
  | error
./calc.at:1414: $PREPARSER ./calc input
./calc.at:1413: cat stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "invalid token" (1.1: )
1.1: syntax error, unexpected invalid token
Cleanup: discarding lookahead token "invalid token" (1.1: )
./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "invalid token" (1.1: )
1.1: syntax error, unexpected invalid token
Cleanup: discarding lookahead token "invalid token" (1.1: )
  | 1 + 2 * 3 + !+ ++
./calc.at:1413: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.3: )
Shifting token '+' (1.3: )
Entering state 21
Reading a token
Next token is token "number" (1.5: 2)
Shifting token "number" (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 30
Reading a token
Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 117): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7)
./calc.at:1414: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1414: cat stderr
Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 117): $1 = token '!'
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7)
input:
| 1 + 2 * 3 + !- ++
./calc.at:1413: $PREPARSER ./calc input
input:
| 1 = 2 = 3
./calc.at:1414: $PREPARSER ./calc input
stderr:
stderr:
Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: )
Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm
exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 118): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7)
./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+'
(1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 118): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7)
stderr:
Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: )
./calc.at:1414: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1413: cat stderr
./calc.at:1414: cat stderr
input:
input:
| 1 + 2 * 3 + !* ++
| | +1
./calc.at:1414: $PREPARSER ./calc input
./calc.at:1413: $PREPARSER ./calc input
stderr:
stderr:
Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: )
Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99):
$1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Reducing stack 0 by rule 19 (line 119): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7)
./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: )
stderr:
Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting
token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Reducing stack 0 by rule 19 (line 119): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7)
./calc.at:1414: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1414: cat stderr
./calc.at:1414: $PREPARSER ./calc /dev/null
stderr:
Starting parse Entering state 0 Reading a token Now at end of input.
1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: )
./calc.at:1413: cat stderr
./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
| (#) + (#) = 2222
stderr:
./calc.at:1413: $PREPARSER ./calc input
Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: )
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9:
2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1414: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16:
2222) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1414: cat stderr
./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1413: cat stderr
input:
input:
| (1 + #) = 1111
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1413: $PREPARSER ./calc input
./calc.at:1414: $PREPARSER ./calc input
stderr:
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error:
discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing 
stack 0 by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' 
(1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing 
stack 0 by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' 
(1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 
Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1414: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1413: cat stderr ./calc.at:1414: cat stderr input: input: | (# + 1) = 1111 ./calc.at:1413: $PREPARSER ./calc input | (!!) 
+ (1 2) = 1 ./calc.at:1414: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Reducing stack 0 by rule 16 (line 116): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = 
nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = 
token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Reducing stack 0 by rule 16 (line 116): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 
-> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1413: cat stderr ./calc.at:1414: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (1 + # + 1) = 1111 ./calc.at:1413: $PREPARSER ./calc input stderr: ./calc.at:1414: cat stderr Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Reading a token Next token is token "number" (1.15-18: 1111) Shifting token "number" (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering 
state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Reading a token Next token is token "number" (1.15-18: 1111) Shifting 
token "number" (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | (- *) + (1 2) = 1 ./calc.at:1414: $PREPARSER ./calc input ./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 115): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" 
(1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1413: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 115): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" 
(1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | (1 + 1) / (1 - 1) ./calc.at:1413: $PREPARSER ./calc input ./calc.at:1414: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Reading a token Next token is token ')' (1.17: ) 
Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 102): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1414: cat stderr ./calc.at:1413: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = 
token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 29 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 102): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | (* *) + (*) + (*) ./calc.at:1414: $PREPARSER ./calc input ./calc.at:1413: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 30 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1413: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: )
Entering state 11
Next token is token '*' (1.16: )
Error: discarding token '*' (1.16: )
Reading a token
Next token is token ')' (1.17: )
Entering state 11
Next token is token ')' (1.17: )
Shifting token ')' (1.17: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.15: )
   $2 = token error (1.16: )
   $3 = token ')' (1.17: )
-> $$ = nterm exp (1.15-17: 1111)
Entering state 30
Reading a token
Next token is token '\n' (1.18-2.0: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1-11: 2222)
   $2 = token '+' (1.13: )
   $3 = nterm exp (1.15-17: 1111)
-> $$ = nterm exp (1.1-17: 3333)
Entering state 8
Next token is token '\n' (1.18-2.0: )
Shifting token '\n' (1.18-2.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (1.1-17: 3333)
   $2 = token '\n' (1.18-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 78):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
519. calc.at:1413: ok
./calc.at:1414: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1414: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1414: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 117): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 117): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) input: | 1 + 2 * 3 + !- ++ ./calc.at:1414: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 118): $1 = token '!' 
(1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 118): $1 = token '!' 
   (1.13: )
   $2 = token '-' (1.14: )
Cleanup: popping token '+' (1.11: )
Cleanup: popping nterm exp (1.1-9: 7)
./calc.at:1414: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4) ?
    "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" :
    "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1414: cat stderr
input:
  | 1 + 2 * 3 + !* ++
./calc.at:1414: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.3: )
Shifting token '+' (1.3: )
Entering state 21
Reading a token
Next token is token "number" (1.5: 2)
Shifting token "number" (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 30
Reading a token
Next token is token '*' (1.7: )
Shifting token '*' (1.7: )
Entering state 22
Reading a token
Next token is token "number" (1.9: 3)
Shifting token "number" (1.9: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.9: 3)
-> $$ = nterm exp (1.9: 3)
Entering state 31
Reading a token
Next token is token '+' (1.11: )
Reducing stack 0 by rule 9 (line 101):
   $1 = nterm exp (1.5: 2)
   $2 = token '*' (1.7: )
   $3 = nterm exp (1.9: 3)
-> $$ = nterm exp (1.5-9: 6)
Entering state 30
Next token is token '+' (1.11: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.3: )
   $3 = nterm exp (1.5-9: 6)
-> $$ = nterm exp (1.1-9: 7)
Entering state 8
Next token is token '+' (1.11: )
Shifting token '+' (1.11: )
Entering state 21
Reading a token
Next token is token '!' (1.13: )
Shifting token '!'
(1.13: ) Entering state 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Reducing stack 0 by rule 19 (line 119): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 522. calc.at:1426: testing Calculator lalr1.cc %header ... ./calc.at:1426: mv calc.y.tmp calc.y stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 21 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Reading a token Next token is token '*' (1.14: ) Shifting token '*' (1.14: ) Entering state 15 Reducing stack 0 by rule 19 (line 119): $1 = token '!' (1.13: ) $2 = token '*' (1.14: ) 1.14: memory exhausted Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1426: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1414: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1414: cat stderr input: | (#) + (#) = 2222 ./calc.at:1414: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: 
) Shifting token ')' (1.9: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 21 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Reducing stack 
0 by rule 5 (line 88): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1414: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1414: cat stderr input: | (1 + #) = 1111 ./calc.at:1414: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) 
-> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) 
Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1414: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1414: cat stderr input: | (# + 1) = 1111 ./calc.at:1414: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 
by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1426: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 19 Reading a token Next token is token 
"number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 28 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1414: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1414: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1414: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Reading a token Next token is token "number" (1.15-18: 1111) Shifting token "number" (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering 
state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 19 Reading a token Next token is token "number" (1.15-18: 1111) Shifting 
token "number" (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 28 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1414: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1414: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1414: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) 
-> $$ = nterm exp (1.16: 1) Entering state 29 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 102): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1414: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> 
$$ = nterm exp (1.16: 1) Entering state 29 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 102): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1414: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1414: cat stderr 520. calc.at:1414: ok 523. calc.at:1431: testing Calculator C++ ... 
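Between each run above, the test driver pipes `expout` through a Perl one-liner that rewrites the older "syntax error on token [X] (expected: ...)" diagnostics into the bison-style "syntax error, unexpected X, expecting A or B" form. As a rough illustration (not part of the test suite — a hypothetical Python translation of that Perl substitution):

```python
import re

def normalize(line: str) -> str:
    """Sketch of the driver's expout rewrite: turn
    'syntax error on token [X] (expected: [A] [B])' into
    'syntax error, unexpected X, expecting A or B'."""
    def repl(m):
        unexp = m.group(1)
        exps = re.findall(r"\[(.*?)\]", m.group(2))
        # Like the Perl ($#exps && $#exps < 4) test: list the
        # expectations only when there are between 2 and 4 of them.
        if 1 <= len(exps) - 1 < 4:
            return ("syntax error, unexpected %s, expecting %s"
                    % (unexp, " or ".join(exps)))
        return "syntax error, unexpected %s" % unexp
    return re.sub(r"syntax error on token \[(.*?)\] \(expected: (.*)\)",
                  repl, line)
```

With a single expected token the rewrite drops the "expecting" clause, matching the Perl version's behavior when `$#exps` is 0.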
./calc.at:1431: mv calc.y.tmp calc.y ./calc.at:1431: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1431: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: stdout: ./calc.at:1416: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.c calc.h input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1416: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) 
Entering state 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp 
(2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token "number" (4.2: 1) Shifting token 
"number" (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 33 Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token 
Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token 
'\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is 
token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering 
state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 
2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 29 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> 
$$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 
Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) 
Entering state 33 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 28 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (14.1: ) Entering state 17 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 21 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 30 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 22 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 31 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 30 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 19 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 28 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) 
Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 21 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 30 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 22 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 31 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 19 Reading a token Next token is token '-' (2.14: ) 
Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 28 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 24 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) 
Entering state 33 Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 19 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 28 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing 
stack 0 by rule 11 (line 111): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 24 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 33 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 19 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 28 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 18 Reducing stack 0 
by rule 2 (line 79): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 19 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 28 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 
25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 20 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 29 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 20 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 29 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 19 Reading a token Next token is token 
'-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 111): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 28 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 20 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 20 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering 
state 29 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 29 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 19 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 28 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next 
token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 24 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 33 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 24 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 33 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 33 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 19 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 28 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line 
(12.1-13.0: ) Entering state 18 Reducing stack 0 by rule 2 (line 79): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 24 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 33 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 24 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 33 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 112): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 19 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Reducing stack 
0 by rule 5 (line 88):
   $1 = token "number" (13.11-12: 64)
-> $$ = nterm exp (13.11-12: 64)
Entering state 28
Reading a token
Next token is token '\n' (13.13-14.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (13.1-7: 64)
   $2 = token '=' (13.9: )
   $3 = nterm exp (13.11-12: 64)
-> $$ = nterm exp (13.1-12: 64)
Entering state 8
Next token is token '\n' (13.13-14.0: )
Shifting token '\n' (13.13-14.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (13.1-12: 64)
   $2 = token '\n' (13.13-14.0: )
-> $$ = nterm line (13.1-14.0: )
Entering state 18
Reducing stack 0 by rule 2 (line 79):
   $1 = nterm input (1.1-13.0: )
   $2 = nterm line (13.1-14.0: )
-> $$ = nterm input (1.1-14.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (14.1: )
Entering state 17
Cleanup: popping token "end of input" (14.1: )
Cleanup: popping nterm input (1.1-14.0: )
input:
  | 1 2
./calc.at:1416: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token "number" (1.3: 2)
1.3: syntax error, unexpected number
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token "number" (1.3: 2)
./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token "number" (1.3: 2)
1.3: syntax error, unexpected number
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token "number" (1.3: 2)
./calc.at:1416: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected:
(.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1416: cat stderr
input:
  | 1//2
./calc.at:1416: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '/' (1.2: )
Shifting token '/' (1.2: )
Entering state 23
Reading a token
Next token is token '/' (1.3: )
1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!'
Error: popping token '/' (1.2: )
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token '/' (1.3: )
./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '/' (1.2: )
Shifting token '/' (1.2: )
Entering state 23
Reading a token
Next token is token '/' (1.3: )
1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!'
Error: popping token '/' (1.2: )
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token '/' (1.3: )
./calc.at:1416: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1416: cat stderr
input:
  | error
./calc.at:1416: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "invalid token" (1.1: )
1.1: syntax error, unexpected invalid token
Cleanup: discarding lookahead token "invalid token" (1.1: )
./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "invalid token" (1.1: )
1.1: syntax error, unexpected invalid token
Cleanup: discarding lookahead token "invalid token" (1.1: )
./calc.at:1416: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1416: cat stderr
input:
  | 1 = 2 = 3
./calc.at:1416: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '=' (1.3: )
Shifting token '=' (1.3: )
Entering state 19
Reading a token
Next token is token "number" (1.5: 2)
Shifting token "number" (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 28
Reading a token
Next token is token '=' (1.7: )
1.7: syntax error, unexpected '='
Error: popping nterm exp (1.5: 2)
Error: popping token '=' (1.3: )
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token '=' (1.7: )
./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 19 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 28 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1416: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1416: cat stderr input: | | +1 ./calc.at:1416: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 83): $1 = token '\n' (1.1-2.0: ) -> $$ = 
nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1416: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1416: cat stderr ./calc.at:1416: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) ./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) ./calc.at:1416: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1416: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1416: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing 
stack 0 by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' 
(1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 21 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 30 Reading a token Next token 
is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 30 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 21 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 30 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 21 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 30 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 21 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 22 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 31 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 101): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 22 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 30 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 19 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 28 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1416: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1416: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1416: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Reducing stack 0 by rule 16 (line 116): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 99): $1 = 
nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 16 Reducing stack 0 by rule 16 (line 116): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 21 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 30 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 
-> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1416: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1416: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1416: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 115): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" 
(1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 115): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 30 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 19 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" 
(1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 28 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 89): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1416: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1416: cat stderr input: | (* *) + (*) + (*) ./calc.at:1416: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 21 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 26 Reducing stack 0 by rule 14 (line 114): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 30 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 21 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: )
Entering state 11
Next token is token '*' (1.16: )
Error: discarding token '*' (1.16: )
Reading a token
Next token is token ')' (1.17: )
Entering state 11
Next token is token ')' (1.17: )
Shifting token ')' (1.17: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.15: )
   $2 = token error (1.16: )
   $3 = token ')' (1.17: )
-> $$ = nterm exp (1.15-17: 1111)
Entering state 30
Reading a token
Next token is token '\n' (1.18-2.0: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1-11: 2222)
   $2 = token '+' (1.13: )
   $3 = nterm exp (1.15-17: 1111)
-> $$ = nterm exp (1.1-17: 3333)
Entering state 8
Next token is token '\n' (1.18-2.0: )
Shifting token '\n' (1.18-2.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (1.1-17: 3333)
   $2 = token '\n' (1.18-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 78):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1416: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1416: cat stderr
input:
 | 1 + 2 * 3 + !+ ++
./calc.at:1416: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.3: )
Shifting token '+' (1.3: )
Entering state 21
Reading a token
Next token is token "number" (1.5: 2)
Shifting token "number" (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 30
Reading a token
Next token is token '*' (1.7: )
Shifting token '*' (1.7: )
Entering state 22
Reading a token
Next token is token "number" (1.9: 3)
Shifting token "number" (1.9: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.9: 3)
-> $$ = nterm exp (1.9: 3)
Entering state 31
Reading a token
Next token is token '+' (1.11: )
Reducing stack 0 by rule 9 (line 101):
   $1 = nterm exp (1.5: 2)
   $2 = token '*' (1.7: )
   $3 = nterm exp (1.9: 3)
-> $$ = nterm exp (1.5-9: 6)
Entering state 30
Next token is token '+' (1.11: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.3: )
   $3 = nterm exp (1.5-9: 6)
-> $$ = nterm exp (1.1-9: 7)
Entering state 8
Next token is token '+' (1.11: )
Shifting token '+' (1.11: )
Entering state 21
Reading a token
Next token is token '!' (1.13: )
Shifting token '!' (1.13: )
Entering state 5
Reading a token
Next token is token '+' (1.14: )
Shifting token '+' (1.14: )
Entering state 14
Reducing stack 0 by rule 17 (line 117):
   $1 = token '!'
(1.13: )
   $2 = token '+' (1.14: )
Cleanup: popping token '+' (1.11: )
Cleanup: popping nterm exp (1.1-9: 7)
./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
 | 1 + 2 * 3 + !- ++
./calc.at:1416: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.3: )
Shifting token '+' (1.3: )
Entering state 21
Reading a token
Next token is token "number" (1.5: 2)
Shifting token "number" (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 30
Reading a token
Next token is token '*' (1.7: )
Shifting token '*' (1.7: )
Entering state 22
Reading a token
Next token is token "number" (1.9: 3)
Shifting token "number" (1.9: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.9: 3)
-> $$ = nterm exp (1.9: 3)
Entering state 31
Reading a token
Next token is token '+' (1.11: )
Reducing stack 0 by rule 9 (line 101):
   $1 = nterm exp (1.5: 2)
   $2 = token '*' (1.7: )
   $3 = nterm exp (1.9: 3)
-> $$ = nterm exp (1.5-9: 6)
Entering state 30
Next token is token '+' (1.11: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.3: )
   $3 = nterm exp (1.5-9: 6)
-> $$ = nterm exp (1.1-9: 7)
Entering state 8
Next token is token '+' (1.11: )
Shifting token '+' (1.11: )
Entering state 21
Reading a token
Next token is token '!' (1.13: )
Shifting token '!' (1.13: )
Entering state 5
Reading a token
Next token is token '-' (1.14: )
Shifting token '-' (1.14: )
Entering state 13
Reducing stack 0 by rule 18 (line 118):
   $1 = token '!'
(1.13: )
   $2 = token '-' (1.14: )
Cleanup: popping token '+' (1.11: )
Cleanup: popping nterm exp (1.1-9: 7)
./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1416: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1416: cat stderr
input:
 | 1 + 2 * 3 + !* ++
./calc.at:1416: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.3: )
Shifting token '+' (1.3: )
Entering state 21
Reading a token
Next token is token "number" (1.5: 2)
Shifting token "number" (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 30
Reading a token
Next token is token '*' (1.7: )
Shifting token '*' (1.7: )
Entering state 22
Reading a token
Next token is token "number" (1.9: 3)
Shifting token "number" (1.9: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.9: 3)
-> $$ = nterm exp (1.9: 3)
Entering state 31
Reading a token
Next token is token '+' (1.11: )
Reducing stack 0 by rule 9 (line 101):
   $1 = nterm exp (1.5: 2)
   $2 = token '*' (1.7: )
   $3 = nterm exp (1.9: 3)
-> $$ = nterm exp (1.5-9: 6)
Entering state 30
Next token is token '+' (1.11: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.3: )
   $3 = nterm exp (1.5-9: 6)
-> $$ = nterm exp (1.1-9: 7)
Entering state 8
Next token is token '+' (1.11: )
Shifting token '+' (1.11: )
Entering state 21
Reading a token
Next token is token '!' (1.13: )
Shifting token '!'
(1.13: )
Entering state 5
Reading a token
Next token is token '*' (1.14: )
Shifting token '*' (1.14: )
Entering state 15
Reducing stack 0 by rule 19 (line 119):
   $1 = token '!' (1.13: )
   $2 = token '*' (1.14: )
1.14: memory exhausted
Cleanup: popping token '+' (1.11: )
Cleanup: popping nterm exp (1.1-9: 7)
./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1416: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1416: cat stderr
input:
 | (#) + (#) = 2222
./calc.at:1416: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
1.2: syntax error: invalid character: '#'
Next token is token error (1.2: )
Shifting token error (1.2: )
Entering state 11
Next token is token error (1.2: )
Error: discarding token error (1.2: )
Reading a token
Next token is token ')' (1.3: )
Entering state 11
Next token is token ')' (1.3: )
Shifting token ')' (1.3: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.1: )
   $2 = token error (1.2: )
   $3 = token ')' (1.3: )
-> $$ = nterm exp (1.1-3: 1111)
Entering state 8
Reading a token
Next token is token '+' (1.5: )
Shifting token '+' (1.5: )
Entering state 21
Reading a token
Next token is token '(' (1.7: )
Shifting token '(' (1.7: )
Entering state 4
Reading a token
1.8: syntax error: invalid character: '#'
Next token is token error (1.8: )
Shifting token error (1.8: )
Entering state 11
Next token is token error (1.8: )
Error: discarding token error (1.8: )
Reading a token
Next token is token ')' (1.9: )
Entering state 11
Next token is token ')' (1.9: )
Shifting token ')' (1.9: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.7: )
   $2 = token error (1.8: )
   $3 = token ')' (1.9: )
-> $$ = nterm exp (1.7-9: 1111)
Entering state 30
Reading a token
Next token is token '=' (1.11: )
Reducing stack 0 by rule 7 (line 99):
   $1 = nterm exp (1.1-3: 1111)
   $2 =
token '+' (1.5: )
   $3 = nterm exp (1.7-9: 1111)
-> $$ = nterm exp (1.1-9: 2222)
Entering state 8
Next token is token '=' (1.11: )
Shifting token '=' (1.11: )
Entering state 19
Reading a token
Next token is token "number" (1.13-16: 2222)
Shifting token "number" (1.13-16: 2222)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.13-16: 2222)
-> $$ = nterm exp (1.13-16: 2222)
Entering state 28
Reading a token
Next token is token '\n' (1.17-2.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (1.1-9: 2222)
   $2 = token '=' (1.11: )
   $3 = nterm exp (1.13-16: 2222)
-> $$ = nterm exp (1.1-16: 2222)
Entering state 8
Next token is token '\n' (1.17-2.0: )
Shifting token '\n' (1.17-2.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (1.1-16: 2222)
   $2 = token '\n' (1.17-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 78):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1416: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1416: cat stderr
input:
 | (1 + #) = 1111
./calc.at:1416: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.2: 1)
-> $$ = nterm exp (1.2: 1)
Entering state 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 21
Reading a token
1.6: syntax error: invalid character: '#'
Next token is token error (1.6: )
Error: popping token '+' (1.4: )
Error: popping nterm exp (1.2: 1)
Shifting token error (1.2-6: )
Entering state 11
Next token is token error (1.6: )
Error: discarding token error (1.6: )
Reading a token
Next token is token ')' (1.7: )
Entering state 11
Next token is token ')' (1.7: )
Shifting token ')' (1.7: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.1: )
   $2 = token error (1.2-6: )
   $3 = token ')' (1.7: )
-> $$ = nterm exp (1.1-7: 1111)
Entering state 8
Reading a token
Next token is token '=' (1.9: )
Shifting token '=' (1.9: )
Entering state 19
Reading a token
Next token is token "number" (1.11-14: 1111)
Shifting token "number" (1.11-14: 1111)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.11-14: 1111)
-> $$ = nterm exp (1.11-14: 1111)
Entering state 28
Reading a token
Next token is token '\n' (1.15-2.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (1.1-7: 1111)
   $2 = token '=' (1.9: )
   $3 = nterm exp (1.11-14: 1111)
-> $$ = nterm exp (1.1-14: 1111)
Entering state 8
Next token is token '\n' (1.15-2.0: )
Shifting token '\n' (1.15-2.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (1.1-14: 1111)
   $2 = token '\n' (1.15-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 78):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1416: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1416: cat stderr
input:
 | (# + 1) = 1111
./calc.at:1416: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
1.2: syntax error: invalid character: '#'
Next token is token error (1.2: )
Shifting token error (1.2: )
Entering state 11
Next token is token error (1.2: )
Error: discarding token error (1.2: )
Reading a token
Next token is token '+' (1.4: )
Error: discarding token '+' (1.4: )
Reading a token
Next token is token "number" (1.6: 1)
Error: discarding token "number" (1.6: 1)
Reading a token
Next token is token ')' (1.7: )
Entering state 11
Next token is token ')' (1.7: )
Shifting token ')' (1.7: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.1: )
   $2 = token error (1.2-6: )
   $3 = token ')' (1.7: )
-> $$ = nterm exp (1.1-7: 1111)
Entering state 8
Reading a token
Next token is token '=' (1.9: )
Shifting token '=' (1.9: )
Entering state 19
Reading a token
Next token is token "number" (1.11-14: 1111)
Shifting token "number" (1.11-14: 1111)
Entering state 1
Reducing stack 0
by rule 5 (line 88):
   $1 = token "number" (1.11-14: 1111)
-> $$ = nterm exp (1.11-14: 1111)
Entering state 28
Reading a token
Next token is token '\n' (1.15-2.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (1.1-7: 1111)
   $2 = token '=' (1.9: )
   $3 = nterm exp (1.11-14: 1111)
-> $$ = nterm exp (1.1-14: 1111)
Entering state 8
Next token is token '\n' (1.15-2.0: )
Shifting token '\n' (1.15-2.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (1.1-14: 1111)
   $2 = token '\n' (1.15-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 78):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1416: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1416: cat stderr
input:
 | (1 + # + 1) = 1111
./calc.at:1416: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.2: 1)
-> $$ = nterm exp (1.2: 1)
Entering state 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 21
Reading a token
1.6: syntax error: invalid character: '#'
Next token is token error (1.6: )
Error: popping token '+' (1.4: )
Error: popping nterm exp (1.2: 1)
Shifting token error (1.2-6: )
Entering state 11
Next token is token error (1.6: )
Error: discarding token error (1.6: )
Reading a token
Next token is token '+' (1.8: )
Error: discarding token '+' (1.8: )
Reading a token
Next token is token "number" (1.10: 1)
Error: discarding token "number" (1.10: 1)
Reading a token
Next token is token ')' (1.11: )
Entering state 11
Next token is token ')' (1.11: )
Shifting token ')' (1.11: )
Entering state 26
Reducing stack 0 by rule 14 (line 114):
   $1 = token '(' (1.1: )
   $2 = token error (1.2-10: )
   $3 = token ')' (1.11: )
-> $$ = nterm exp (1.1-11: 1111)
Entering state 8
Reading a token
Next token is token '=' (1.13: )
Shifting token '=' (1.13: )
Entering state 19
Reading a token
Next token is token "number" (1.15-18: 1111)
Shifting token "number" (1.15-18: 1111)
Entering state 1
Reducing stack 0 by rule 5 (line 88):
   $1 = token "number" (1.15-18: 1111)
-> $$ = nterm exp (1.15-18: 1111)
Entering state 28
Reading a token
Next token is token '\n' (1.19-2.0: )
Reducing stack 0 by rule 6 (line 89):
   $1 = nterm exp (1.1-11: 1111)
   $2 = token '=' (1.13: )
   $3 = nterm exp (1.15-18: 1111)
-> $$ = nterm exp (1.1-18: 1111)
Entering state 8
Next token is token '\n' (1.19-2.0: )
Shifting token '\n' (1.19-2.0: )
Entering state 25
Reducing stack 0 by rule 4 (line 84):
   $1 = nterm exp (1.1-18: 1111)
   $2 = token '\n' (1.19-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 78):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 17
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1416: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1416: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1416: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) 
-> $$ = nterm exp (1.16: 1) Entering state 29 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 102): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1416: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 21 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 30 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 99): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 23 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 20 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 88): $1 = token "number" (1.16: 1) -> 
$$ = nterm exp (1.16: 1) Entering state 29 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 100): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 27 Reducing stack 0 by rule 13 (line 113): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 32 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 102): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 25 Reducing stack 0 by rule 4 (line 84): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 78): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 17 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1416: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1416: cat stderr 521. calc.at:1416: ok 524. calc.at:1432: testing Calculator C++ %locations ... ./calc.at:1432: mv calc.y.tmp calc.y ./calc.at:1432: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y stderr: stdout: ./calc.at:1431: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. 
(eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc ./calc.at:1432: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1431: $PREPARSER ./calc input stderr: ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1431: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1431: $PREPARSER ./calc input stderr: syntax error ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr input: | 1//2 ./calc.at:1431: $PREPARSER ./calc input stderr: syntax error ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr input: | error ./calc.at:1431: $PREPARSER ./calc input stderr: syntax error ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr input: | 1 = 2 = 3 ./calc.at:1431: $PREPARSER ./calc input stderr: syntax error ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr input: | | +1 ./calc.at:1431: $PREPARSER ./calc input stderr: syntax error ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr ./calc.at:1431: $PREPARSER ./calc /dev/null stderr: syntax error ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1431: $PREPARSER ./calc input stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1431: $PREPARSER ./calc input stderr: syntax error error: 2222 != 1 ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error error: 2222 != 1 ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1431: $PREPARSER ./calc input stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr input: | (* *) + (*) + (*) ./calc.at:1431: $PREPARSER ./calc input stderr: syntax error syntax error syntax error ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1431: $PREPARSER ./calc input stderr: ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1431: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1431: $PREPARSER ./calc input stderr: ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr input: | (#) + (#) = 2222 ./calc.at:1431: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr input: | (1 + #) = 1111 ./calc.at:1431: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr input: | (# + 1) = 1111 ./calc.at:1431: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1431: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1431: $PREPARSER ./calc input stderr: error: null divisor ./calc.at:1431: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: error: null divisor ./calc.at:1431: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1431: cat stderr 523. calc.at:1431: ok 525. calc.at:1433: testing Calculator C++ %locations $NO_EXCEPTIONS_CXXFLAGS ... ./calc.at:1433: mv calc.y.tmp calc.y ./calc.at:1433: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1433: $CXX $CPPFLAGS $CXXFLAGS $NO_EXCEPTIONS_CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: stdout: ./calc.at:1426: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc calc.hh input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1426: $PREPARSER ./calc input stderr: ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1426: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1426: $PREPARSER ./calc input stderr: syntax error ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: cat stderr input: | 1//2 ./calc.at:1426: $PREPARSER ./calc input stderr: syntax error ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: cat stderr input: | error ./calc.at:1426: $PREPARSER ./calc input stderr: syntax error ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: cat stderr input: | 1 = 2 = 3 ./calc.at:1426: $PREPARSER ./calc input stderr: syntax error ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: cat stderr input: | | +1 ./calc.at:1426: $PREPARSER ./calc input stderr: syntax error ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: cat stderr ./calc.at:1426: $PREPARSER ./calc /dev/null stderr: syntax error ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1426: $PREPARSER ./calc input stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1426: $PREPARSER ./calc input stderr: syntax error error: 2222 != 1 ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error error: 2222 != 1 ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1426: $PREPARSER ./calc input stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: cat stderr input: | (* *) + (*) + (*) ./calc.at:1426: $PREPARSER ./calc input stderr: syntax error syntax error syntax error ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1426: $PREPARSER ./calc input stderr: ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1426: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1426: $PREPARSER ./calc input stderr: ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: cat stderr input: | (#) + (#) = 2222 ./calc.at:1426: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: ./calc.at:1426: cat stderr stdout: ./calc.at:1432: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc input: input: | (1 + #) = 1111 ./calc.at:1426: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1432: $PREPARSER ./calc input stderr: ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' stderr: ./calc.at:1432: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1432: $PREPARSER ./calc input stderr: 1.3: syntax error ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: cat stderr ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (# + 1) = 1111 ./calc.at:1426: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1432: cat stderr stderr: input: syntax error: invalid character: '#' | 1//2 ./calc.at:1432: $PREPARSER ./calc input stderr: 1.3: syntax error ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: cat stderr input: ./calc.at:1432: cat stderr | (1 + # + 1) = 1111 ./calc.at:1426: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' input: ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | error ./calc.at:1432: $PREPARSER ./calc input stderr: stderr: 1.1: syntax error syntax error: invalid character: '#' ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: cat stderr ./calc.at:1432: cat stderr input: input: | 1 = 2 = 3 ./calc.at:1432: $PREPARSER ./calc input | (1 + 1) / (1 - 1) ./calc.at:1426: $PREPARSER ./calc input stderr: 1.7: syntax error ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: error: null divisor ./calc.at:1426: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error stderr: error: null divisor ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1426: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1432: cat stderr input: ./calc.at:1426: cat stderr | | +1 ./calc.at:1432: $PREPARSER ./calc input stderr: 2.1: syntax error 522. calc.at:1426: ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ok stderr: 2.1: syntax error ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1432: cat stderr ./calc.at:1432: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1432: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1432: $PREPARSER ./calc input 526. calc.at:1434: testing Calculator C++ %locations api.location.type={Span} ... ./calc.at:1434: mv calc.y.tmp calc.y stderr: ./calc.at:1434: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1432: cat stderr input: | (!!) 
+ (1 2) = 1 ./calc.at:1432: $PREPARSER ./calc input stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1432: cat stderr ./calc.at:1434: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS input: | (- *) + (1 2) = 1 ./calc.at:1432: $PREPARSER ./calc input stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1432: cat stderr input: | (* *) + (*) + (*) ./calc.at:1432: $PREPARSER ./calc input stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1432: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1432: $PREPARSER ./calc input stderr: ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1432: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1432: $PREPARSER ./calc input stderr: ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1432: cat stderr input: | (#) + (#) = 2222 ./calc.at:1432: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1432: cat stderr input: | (1 + #) = 1111 ./calc.at:1432: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1432: cat stderr input: | (# + 1) = 1111 ./calc.at:1432: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1432: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1432: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1432: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1432: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1432: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1432: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1432: cat stderr 524. calc.at:1432: ok 527. 
calc.at:1435: testing Calculator C++ %header %locations parse.error=verbose %name-prefix "calc" %verbose ... ./calc.at:1435: mv calc.y.tmp calc.y ./calc.at:1435: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1435: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS stderr: stdout: ./calc.at:1433: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1433: $PREPARSER ./calc input stderr: ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1433: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1433: $PREPARSER ./calc input stderr: 1.3: syntax error ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr input: | 1//2 ./calc.at:1433: $PREPARSER ./calc input stderr: 1.3: syntax error ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr input: | error ./calc.at:1433: $PREPARSER ./calc input stderr: 1.1: syntax error ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr input: | 1 = 2 = 3 ./calc.at:1433: $PREPARSER ./calc input stderr: 1.7: syntax error ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr input: | | +1 ./calc.at:1433: $PREPARSER ./calc input stderr: 2.1: syntax error ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr ./calc.at:1433: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1433: $PREPARSER ./calc input stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1433: $PREPARSER ./calc input stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1433: $PREPARSER ./calc input stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr input: | (* *) + (*) + (*) ./calc.at:1433: $PREPARSER ./calc input stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1433: $PREPARSER ./calc input stderr: ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1433: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1433: $PREPARSER ./calc input stderr: ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr input: | (#) + (#) = 2222 ./calc.at:1433: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr input: | (1 + #) = 1111 ./calc.at:1433: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr input: | (# + 1) = 1111 ./calc.at:1433: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1433: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1433: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1433: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1433: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1433: cat stderr 525. calc.at:1433: ok 528. calc.at:1437: testing Calculator C++ %locations parse.error=verbose api.prefix={calc} %verbose ... ./calc.at:1437: mv calc.y.tmp calc.y ./calc.at:1437: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1437: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: stdout: ./calc.at:1434: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1434: $PREPARSER ./calc input stderr: ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1434: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1434: $PREPARSER ./calc input stderr: 1.3: syntax error ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr input: | 1//2 ./calc.at:1434: $PREPARSER ./calc input stderr: 1.3: syntax error ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr input: | error ./calc.at:1434: $PREPARSER ./calc input stderr: 1.1: syntax error ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr input: | 1 = 2 = 3 ./calc.at:1434: $PREPARSER ./calc input stderr: 1.7: syntax error ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr input: | | +1 ./calc.at:1434: $PREPARSER ./calc input stderr: 2.1: syntax error ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr ./calc.at:1434: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1434: $PREPARSER ./calc input stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1434: $PREPARSER ./calc input stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1434: $PREPARSER ./calc input stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr input: | (* *) + (*) + (*) ./calc.at:1434: $PREPARSER ./calc input stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1434: $PREPARSER ./calc input stderr: ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1434: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1434: $PREPARSER ./calc input stderr: ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr input: | (#) + (#) = 2222 ./calc.at:1434: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr input: | (1 + #) = 1111 ./calc.at:1434: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr input: | (# + 1) = 1111 ./calc.at:1434: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1434: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1434: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1434: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1434: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1434: cat stderr 526. calc.at:1434: ok 529. calc.at:1438: testing Calculator C++ %locations parse.error=verbose %debug %name-prefix "calc" %verbose ... ./calc.at:1438: mv calc.y.tmp calc.y ./calc.at:1438: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1438: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: stdout: ./calc.at:1437: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1437: $PREPARSER ./calc input stderr: ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1437: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1437: $PREPARSER ./calc input stderr: 1.3: syntax error, unexpected number ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error, unexpected number ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr input: | 1//2 ./calc.at:1437: $PREPARSER ./calc input stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr input: | error ./calc.at:1437: $PREPARSER ./calc input stderr: 1.1: syntax error, unexpected invalid token ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error, unexpected invalid token ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr input: | 1 = 2 = 3 ./calc.at:1437: $PREPARSER ./calc input stderr: 1.7: syntax error, unexpected '=' ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error, unexpected '=' ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr input: | | +1 ./calc.at:1437: $PREPARSER ./calc input stderr: 2.1: syntax error, unexpected '+' ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error, unexpected '+' ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr ./calc.at:1437: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error, unexpected end of input ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error, unexpected end of input ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1437: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1437: $PREPARSER ./calc input stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1437: $PREPARSER ./calc input stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr input: | (* *) + (*) + (*) ./calc.at:1437: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1437: $PREPARSER ./calc input stderr: ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1437: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1437: $PREPARSER ./calc input stderr: ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr input: | (#) + (#) = 2222 ./calc.at:1437: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr input: | (1 + #) = 1111 ./calc.at:1437: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr input: | (# + 1) = 1111 ./calc.at:1437: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1437: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1437: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1437: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1437: cat stderr 528. calc.at:1437: ok 530. 
calc.at:1440: testing Calculator C++ %locations parse.error=verbose %debug api.prefix={calc} %verbose ... ./calc.at:1440: mv calc.y.tmp calc.y ./calc.at:1440: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1440: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: stdout: ./calc.at:1435: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc calc.hh input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1435: $PREPARSER ./calc input stderr: ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1435: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1435: $PREPARSER ./calc input stderr: 1.3: syntax error, unexpected number ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error, unexpected number ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr input: | 1//2 ./calc.at:1435: $PREPARSER ./calc input stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr input: | error ./calc.at:1435: $PREPARSER ./calc input stderr: 1.1: syntax error, unexpected invalid token ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error, unexpected invalid token ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr input: | 1 = 2 = 3 ./calc.at:1435: $PREPARSER ./calc input stderr: 1.7: syntax error, unexpected '=' ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error, unexpected '=' ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr input: | | +1 ./calc.at:1435: $PREPARSER ./calc input stderr: 2.1: syntax error, unexpected '+' ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error, unexpected '+' ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr ./calc.at:1435: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error, unexpected end of input ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error, unexpected end of input ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1435: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr input: | (!!) 
+ (1 2) = 1 ./calc.at:1435: $PREPARSER ./calc input stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1435: $PREPARSER ./calc input stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr input: | (* *) + (*) + (*) ./calc.at:1435: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1435: $PREPARSER ./calc input stderr: ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1435: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1435: $PREPARSER ./calc input stderr: ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr input: | (#) + (#) = 2222 ./calc.at:1435: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr input: | (1 + #) = 1111 ./calc.at:1435: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr input: | (# + 1) = 1111 ./calc.at:1435: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1435: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1435: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1435: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1435: cat stderr 527. calc.at:1435: ok 531. calc.at:1441: testing Calculator C++ %locations parse.error=verbose %debug api.prefix={calc} api.token.prefix={TOK_} %verbose ... ./calc.at:1441: mv calc.y.tmp calc.y ./calc.at:1441: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y stderr: stdout: ./calc.at:1438: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1438: $PREPARSER ./calc input ./calc.at:1441: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number 
(1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Stack now 0 6 8 20 29 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 20 29 21 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 20 29 21 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 20 29 21 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Stack now 0 6 8 
20 29 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Stack now 0 6 8 20 29 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Stack 
now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Stack now 0 6 2 10 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 23 1 Reducing stack by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Stack now 0 6 2 10 23 32 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' 
(4.10-5.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 103): 
$1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 
Reducing stack by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 
Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 
Stack now 0 6 8 18 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 19 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 19 4 1 Reducing stack by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 19 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Stack now 0 6 8 19 4 12 19 
Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 19 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Stack now 0 6 8 19 4 12 19 28 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 19 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Stack now 0 6 8 19 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) 
-> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Stack now 0 6 8 23 32 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 23 32 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Stack now 0 6 8 23 32 23 32 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Stack now 0 6 8 23 32 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 
18 Stack now 0 6 8 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Stack now 0 6 4 12 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Stack now 0 6 4 12 23 32 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: 
) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (14.1: ) Shifting token end of input (14.1: ) Entering state 16 Stack now 0 6 16 Stack 
now 0 6 16 Cleanup: popping token end of input (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Stack now 0 8 18 27 Reading a token Next token 
is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Stack now 0 6 8 20 29 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 20 29 21 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 20 29 21 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 20 29 21 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Stack now 0 6 8 20 29 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (2.5: 
2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Stack now 0 6 8 20 29 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: 
) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Stack now 0 6 2 10 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 23 1 Reducing stack by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Stack now 0 6 2 10 23 32 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = 
nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) 
Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 
Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering 
state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) 
Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 19 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 19 4 1 Reducing stack by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 19 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Stack now 0 6 8 19 4 12 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack 
now 0 6 8 19 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Stack now 0 6 8 19 4 12 19 28 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 19 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Stack now 0 6 8 19 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' 
(11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Stack now 0 6 8 23 32 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 23 32 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Stack now 0 6 8 23 32 23 32 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Stack now 0 6 8 23 32 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number 
(12.9-11: 256) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Stack now 0 6 4 12 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Stack now 0 6 4 12 23 32 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (13.1: 
) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (14.1: ) Shifting token end of input (14.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) 
./calc.at:1438: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr input: | 1//2 ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr input: | error ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr input: | 1 = 2 = 3 ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 18 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack 
now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 18 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr input: | | +1 ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr ./calc.at:1438: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token end of input (1.1: ) 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) Stack now 0 ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token end of input (1.1: ) 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) Stack now 0 ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 
Stack now 0 8 20 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 20 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 20 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 20 4 12 21 1 Reducing stack by rule 5 (line 79): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Stack now 0 8 20 4 12 21 30 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: 
error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 
Stack now 0 8 20 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 20 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 20 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 20 4 12 21 1 Reducing stack by rule 5 (line 79): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Stack now 0 8 20 4 12 21 30 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: 
error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Stack now 0 4 5 15 Reducing stack by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Stack now 0 4 5 15 Reducing stack by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 90): $1 = 
nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 90): $1 = 
nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr input: | (* *) + (*) + (*) ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 20 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 20 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 20 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 20 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 20 5 14 Reducing stack by rule 17 (line 108): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token 
'!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 20 5 14 Reducing stack by rule 17 (line 108): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1438: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token 
'+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 20 5 13 Reducing stack by rule 18 (line 109): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is 
token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 20 5 13 Reducing stack by rule 18 (line 109): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr input: | (#) + (#) = 2222 ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 20 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error 
(1.8: ) Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.1-8: ) Stack now 0 8 20 4 Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.1-8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of 
input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 20 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.1-8: ) Stack now 0 8 20 4 Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.1-8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp 
(1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr input: | (1 + #) = 1111 ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token 
'=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' 
(1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr input: | (# + 1) = 1111 ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.1-4: ) Stack now 0 4 Shifting token error (1.1-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) 
-> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.1-4: ) Stack now 0 4 Shifting token error (1.1-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token 
Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 
18 Stack now 0 8 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 
4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting 
token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1438: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Stack now 0 4 12 20 29 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: 
) Entering state 4 Stack now 0 8 22 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 22 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 22 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Stack now 0 8 22 4 12 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 22 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 22 4 12 19 28 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 22 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 22 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Stack now 0 8 22 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: 
popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Stack now 0 4 12 20 29 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 22 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 22 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 22 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Stack now 0 8 22 
4 12 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 22 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 22 4 12 19 28 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 22 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 22 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Stack now 0 8 22 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1438: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1438: cat stderr 529. calc.at:1438: ok 532. calc.at:1443: testing Calculator C++ %header %locations parse.error=verbose %debug %name-prefix "calc" %verbose %parse-param {semantic_value *result}{int *count}{int *nerrs} ... ./calc.at:1443: mv calc.y.tmp calc.y ./calc.at:1443: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1443: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS stderr: stdout: ./calc.at:1440: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1440: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) 
-> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = 
nterm exp (2.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Stack now 0 6 8 20 29 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 20 29 21 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 20 29 21 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 20 29 21 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Stack now 0 6 8 20 29 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Stack now 0 6 8 20 29 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token 
'=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Stack now 0 6 2 10 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 23 1 Reducing stack by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Stack now 0 6 2 10 23 32 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) 
-> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: 
-1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 
3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 
11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next 
token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 
Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 19 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 19 4 1 Reducing stack by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 19 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Stack now 0 6 8 19 4 12 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 19 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Stack now 0 6 8 19 4 12 19 28 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 19 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Stack now 0 6 8 19 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 
0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Stack now 0 6 8 23 32 23 Reading a token Next token is token 
number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 23 32 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Stack now 0 6 8 23 32 23 32 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Stack now 0 6 8 23 32 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 79): $1 = 
token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Stack now 0 6 4 12 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Stack now 0 6 4 12 23 32 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (13.13-14.0: ) 
Reducing stack by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (14.1: ) Shifting token end of input (14.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '=' 
(1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '*' (2.7: ) 
Shifting token '*' (2.7: ) Entering state 21 Stack now 0 6 8 20 29 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 20 29 21 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 20 29 21 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 20 29 21 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Stack now 0 6 8 20 29 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Stack now 0 6 8 20 29 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 
Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Stack now 0 6 2 10 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 23 1 Reducing stack by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Stack now 0 6 2 10 23 32 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting 
token '=' (4.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) 
Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line 
(6.1-7.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering 
state 27 Stack now 0 6 8 18 27 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Stack now 0 6 8 19 Reading a 
token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp 
(10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 19 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 19 4 1 Reducing stack by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 19 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Stack now 0 6 8 19 4 12 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 19 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Stack now 0 6 8 19 4 12 19 28 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 19 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Stack now 0 6 8 19 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp 
(10.15: 2) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Stack now 0 6 8 23 32 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 23 32 23 1 
Reducing stack by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Stack now 0 6 8 23 32 23 32 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Stack now 0 6 8 23 32 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading 
a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Stack now 0 6 4 12 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Stack now 0 6 4 12 23 32 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm 
exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (14.1: ) Shifting token end of input (14.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1440: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1440: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token 
\[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1440: cat stderr input: | 1//2 ./calc.at:1440: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1440: cat stderr input: | error ./calc.at:1440: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1440: cat stderr input: | 1 = 2 = 3 ./calc.at:1440: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 18 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack 
now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 18 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1440: cat stderr input: | | +1 ./calc.at:1440: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1440: cat stderr ./calc.at:1440: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token end of input (1.1: ) 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) Stack now 0 ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token end of input (1.1: ) 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) Stack now 0 ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1440: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1440: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 
Stack now 0 8 20 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 20 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 20 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 20 4 12 21 1 Reducing stack by rule 5 (line 79): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Stack now 0 8 20 4 12 21 30 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: 
error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 
Stack now 0 8 20 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 20 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 20 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 20 4 12 21 1 Reducing stack by rule 5 (line 79): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Stack now 0 8 20 4 12 21 30 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: 
error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1440: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1440: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Stack now 0 4 5 15 Reducing stack by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Stack now 0 4 5 15 Reducing stack by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1440: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1440: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 90): $1 = 
nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 90): $1 = 
nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1440: cat stderr input: | (* *) + (*) + (*) ./calc.at:1440: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 20 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 20 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 20 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 20 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1440: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1440: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 20 5 14 Reducing stack by rule 17 (line 108): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token 
'!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 20 5 14 Reducing stack by rule 17 (line 108): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1440: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1440: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token 
'+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 20 5 13 Reducing stack by rule 18 (line 109): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is 
token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 20 5 13 Reducing stack by rule 18 (line 109): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) stderr: stdout: ./calc.at:1441: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1440: cat stderr ./calc.at:1441: $PREPARSER ./calc input input: | (#) + (#) = 2222 ./calc.at:1440: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error 
(1.1-2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 20 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.1-8: ) Stack now 0 8 20 4 Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.1-8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 
2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 20 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.1-8: ) Stack now 0 8 20 4 Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.9: ) Shifting 
token ')' (1.9: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.1-8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1440: cat stderr input: | (1 + #) = 1111 ./calc.at:1440: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 
Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Stack now 0 6 8 20 29 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 20 29 21 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 20 29 21 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 20 29 21 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Stack now 0 6 8 20 29 21 30 Next token is token '=' (2.12: ) Reducing 
stack by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Stack now 0 6 8 20 29 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = 
nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Stack now 0 6 2 10 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 23 1 Reducing stack by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Stack now 0 6 2 10 23 32 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 80): $1 = 
nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 
= nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.4: 
1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' 
(8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is 
token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 19 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 19 4 1 Reducing stack by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 19 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Stack now 0 6 8 19 4 12 19 Reading a token Next token is token number (10.10: 3) 
Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 19 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Stack now 0 6 8 19 4 12 19 28 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 19 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Stack now 0 6 8 19 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 
Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Stack now 0 6 8 23 32 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 23 32 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Stack now 0 6 8 23 32 23 32 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Stack now 0 6 8 23 32 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is 
token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Stack now 0 6 4 12 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Stack now 0 6 4 12 23 32 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Stack now 0 6 4 12 26 Reducing 
stack by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (14.1: ) Shifting token end of input (14.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input 
(14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing 
stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Stack now 0 6 8 20 29 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 20 29 21 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 20 29 21 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 20 29 21 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Stack now 0 6 8 20 29 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm 
exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Stack now 0 6 8 20 29 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) 
Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Stack now 0 6 2 10 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 23 1 Reducing stack by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Stack now 0 6 2 10 23 32 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm 
exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 
Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a 
token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing 
stack by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 18 
2 1 Reducing stack by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 19 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 19 4 1 Reducing stack by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 19 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Stack now 0 6 8 19 4 12 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 19 4 12 19 1 Reducing 
stack by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Stack now 0 6 8 19 4 12 19 28 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 19 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Stack now 0 6 8 19 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' 
(11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Stack now 0 6 8 23 32 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 23 32 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Stack now 0 6 8 23 32 23 32 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Stack now 0 6 8 23 32 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack 
now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Stack now 0 6 4 12 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Stack now 0 6 4 12 23 32 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = 
token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (14.1: ) Shifting token end of input (14.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) Starting parse Entering state 0 Stack 
now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp 
(1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token 
Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | 1 2 ./calc.at:1441: $PREPARSER ./calc input stderr: ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1440: cat stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 input: ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | (# + 1) = 1111 ./calc.at:1440: $PREPARSER ./calc input ./calc.at:1441: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.1-4: ) Stack now 0 4 Shifting token error (1.1-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ 
= nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1//2 stderr: ./calc.at:1441: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.1-4: ) Stack now 0 4 Shifting token error (1.1-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 
1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1440: cat stderr input: ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | (1 + # + 1) = 1111 ./calc.at:1440: $PREPARSER ./calc input ./calc.at:1441: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Stack 
now 0 8 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | error ./calc.at:1441: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token 
error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next 
token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1440: cat stderr ./calc.at:1441: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1440: $PREPARSER ./calc input input: | 1 = 2 = 3 ./calc.at:1441: $PREPARSER ./calc input stderr: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 18 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Stack now 0 4 
12 20 29 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 22 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 22 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 22 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Stack now 0 8 22 4 12 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 22 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 22 4 12 19 28 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 22 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 22 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Stack now 0 8 22 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = 
token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1440: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 18 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack 
now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Stack now 0 4 12 20 29 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 22 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 22 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 22 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Stack now 0 8 22 4 12 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 22 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 22 4 12 19 28 Reading a token Next token is 
token ')' (1.17: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 22 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 22 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Stack now 0 8 22 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1440: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1441: cat stderr ./calc.at:1440: cat stderr input: | | +1 ./calc.at:1441: $PREPARSER ./calc input 530. calc.at:1440: ok stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1441: cat stderr ./calc.at:1441: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token end of input (1.1: ) 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) Stack now 0 ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token end of input (1.1: ) 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) Stack now 0 ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1441: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1441: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 
Stack now 0 8 20 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 20 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 20 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 20 4 12 21 1 Reducing stack by rule 5 (line 79): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Stack now 0 8 20 4 12 21 30 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: 
error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 
Stack now 0 8 20 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 20 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 20 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 20 4 12 21 1 Reducing stack by rule 5 (line 79): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Stack now 0 8 20 4 12 21 30 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: 
error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) 533. calc.at:1445: testing Calculator C++ parse.error=verbose %debug api.prefix={calc} %verbose %parse-param {semantic_value *result}{int *count}{int *nerrs} ... ./calc.at:1445: mv calc.y.tmp calc.y ./calc.at:1445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1441: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1441: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Stack now 0 4 5 15 Reducing stack by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Stack now 0 4 5 15 Reducing stack by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1441: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1441: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 90): $1 = 
nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 90): $1 = 
nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1441: cat stderr input: ./calc.at:1445: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS | (* *) + (*) + (*) ./calc.at:1441: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 20 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 20 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 20 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 20 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1441: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1441: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 20 5 14 Reducing stack by rule 17 (line 108): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token 
'!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 20 5 14 Reducing stack by rule 17 (line 108): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1441: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1441: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token 
'+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 20 5 13 Reducing stack by rule 18 (line 109): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is 
token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 20 5 13 Reducing stack by rule 18 (line 109): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1441: cat stderr input: | (#) + (#) = 2222 ./calc.at:1441: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 20 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error 
(1.8: ) Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.1-8: ) Stack now 0 8 20 4 Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.1-8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of 
input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 20 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.1-8: ) Stack now 0 8 20 4 Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.1-8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp 
(1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1441: cat stderr input: | (1 + #) = 1111 ./calc.at:1441: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token 
'=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' 
(1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1441: cat stderr input: | (# + 1) = 1111 ./calc.at:1441: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.1-4: ) Stack now 0 4 Shifting token error (1.1-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) 
-> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.1-4: ) Stack now 0 4 Shifting token error (1.1-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token 
Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1441: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1441: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 
18 Stack now 0 8 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 
4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting 
token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1441: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1441: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Stack now 0 4 12 20 29 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: 
) Entering state 4 Stack now 0 8 22 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 22 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 22 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Stack now 0 8 22 4 12 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 22 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 22 4 12 19 28 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 22 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 22 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Stack now 0 8 22 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: 
popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Stack now 0 4 12 20 29 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 22 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 22 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 22 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Stack now 0 8 22 
4 12 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 22 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 22 4 12 19 28 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 22 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 22 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Stack now 0 8 22 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1441: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp";
}eg
' expout || exit 77
./calc.at:1441: cat stderr
531. calc.at:1441: ok
534. calc.at:1446: testing Calculator C++ %header %locations parse.error=verbose %debug api.prefix={calc} %verbose %parse-param {semantic_value *result}{int *count}{int *nerrs} ...
./calc.at:1446: mv calc.y.tmp calc.y
./calc.at:1446: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y
./calc.at:1446: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS
stderr:
stdout:
./calc.at:1443: "$PERL" -ne '
chomp;
print "$ARGV:$.: {$_}\n"
  if (# No starting/ending empty lines.
      (eof || $. == 1) && /^\s*$/
      # No trailing space.
      || /\s$/
      # No tabs.
      || /\t/
     )' calc.cc calc.hh
input:
 | 1 + 2 * 3 = 7
 | 1 + 2 * -3 = -5
 |
 | -1^2 = -1
 | (-1)^2 = 1
 |
 | ---1 = -1
 |
 | 1 - 2 - 3 = -4
 | 1 - (2 - 3) = 2
 |
 | 2^2^3 = 256
 | (2^2)^3 = 64
./calc.at:1443: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token number (1.1: 1)
Shifting token number (1.1: 1)
Entering state 1
Stack now 0 1
Reducing stack by rule 5 (line 79):
   $1 = token number (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '+' (1.3: )
Shifting token '+' (1.3: )
Entering state 20
Stack now 0 8 20
Reading a token
Next token is token number (1.5: 2)
Shifting token number (1.5: 2)
Entering state 1
Stack now 0 8 20 1
Reducing stack by rule 5 (line 79):
   $1 = token number (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 29
Stack now 0 8 20 29
Reading a token
Next token is token '*' (1.7: )
Shifting token '*' (1.7: )
Entering state 21
Stack now 0 8 20 29 21
Reading a token
Next token is token number (1.9: 3)
Shifting token number (1.9: 3)
Entering state 1
Stack now 0 8 20 29 21 1
Reducing stack by rule 5 (line 79):
   $1 = token number
(1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2.5: 2) 
-> $$ = nterm exp (2.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Stack now 0 6 8 20 29 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 20 29 21 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 20 29 21 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 20 29 21 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Stack now 0 6 8 20 29 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Stack now 0 6 8 20 29 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 
= token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Stack now 0 6 2 10 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 23 1 Reducing stack by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Stack now 0 6 2 10 23 32 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp 
(4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm 
exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) 
Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing 
stack by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 
0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) 
Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 19 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 19 4 1 Reducing stack by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 19 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Stack now 0 6 8 19 4 12 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 19 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Stack now 0 6 8 19 4 12 19 28 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 19 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Stack now 0 6 8 19 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering 
state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Stack now 0 6 8 23 32 23 Reading a token 
Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 23 32 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Stack now 0 6 8 23 32 23 32 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Stack now 0 6 8 23 32 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 
(line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Stack now 0 6 4 12 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Stack now 0 6 4 12 23 32 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' 
(13.13-14.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (14.1: ) Shifting token end of input (14.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is 
token '=' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '*' 
(2.7: ) Shifting token '*' (2.7: ) Entering state 21 Stack now 0 6 8 20 29 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 20 29 21 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 20 29 21 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 20 29 21 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Stack now 0 6 8 20 29 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Stack now 0 6 8 20 29 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack 
now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Stack now 0 6 2 10 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 23 1 Reducing stack by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Stack now 0 6 2 10 23 32 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) 
Shifting token '=' (4.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' 
(5.4: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = 
nterm line (6.1-7.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) 
Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Stack now 0 6 8 19 
Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = 
nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 19 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 19 4 1 Reducing stack by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 19 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Stack now 0 6 8 19 4 12 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 19 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Stack now 0 6 8 19 4 12 19 28 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 19 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Stack now 0 6 8 19 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = 
nterm exp (10.15: 2) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Stack now 0 6 8 23 32 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 23 
32 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Stack now 0 6 8 23 32 23 32 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Stack now 0 6 8 23 32 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 
Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Stack now 0 6 4 12 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Stack now 0 6 4 12 23 32 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 
= nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (14.1: ) Shifting token end of input (14.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1443: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1443: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 stderr: stdout: ./calc.at:1443: "$PERL" -pi -e 'use strict; 
s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1445: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc ./calc.at:1443: cat stderr input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1445: $PREPARSER ./calc input input: | 1//2 ./calc.at:1443: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1443: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1443: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '=' () Reducing stack by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Stack now 0 8 20 29 Next token is token '=' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Stack now 0 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (7) Shifting token number (7) Entering state 1 Stack now 0 8 18 1 Reducing stack 
by rule 5 (line 79): $1 = token number (7) -> $$ = nterm exp (7) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Stack now 0 6 8 20 29 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 8 20 29 21 2 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 6 8 20 29 21 2 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 10 Stack now 0 6 8 20 29 21 2 10 Reading a token Next token is token '=' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Stack now 0 6 8 20 29 21 30 Next token is token '=' () Reducing stack by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Stack now 0 6 8 20 29 
Next token is token '=' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (5) Shifting token number (5) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (5) -> $$ = nterm exp (5) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token 
is token '^' () Shifting token '^' () Entering state 23 Stack now 0 6 2 10 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 2 10 23 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Stack now 0 6 2 10 23 32 Reading a token Next token is token '=' () Reducing stack by rule 12 (line 103): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 4 2 
Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' () Shifting token ')' () Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' () Reducing stack by rule 12 (line 103): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () 
Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Stack now 
0 6 8 18 27 Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '-' () Reducing stack by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Stack now 0 6 8 Next token is token '-' () Shifting token '-' () Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' () Reducing stack 
by rule 8 (line 91): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (4) Shifting token number (4) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4) -> $$ = nterm exp (4) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 6 8 19 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 8 19 4 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Stack now 0 6 8 19 4 12 Reading a token Next token is token '-' () Shifting token '-' () Entering 
state 19 Stack now 0 6 8 19 4 12 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 6 8 19 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Stack now 0 6 8 19 4 12 19 28 Reading a token Next token is token ')' () Reducing stack by rule 8 (line 91): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Stack now 0 6 8 19 4 12 Next token is token ')' () Shifting token ')' () Entering state 26 Stack now 0 6 8 19 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' () Reducing stack by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 
Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Stack now 0 6 8 23 32 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 6 8 23 32 23 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Stack now 0 6 8 23 32 23 32 Reading a token Next token is token '=' () Reducing stack by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Stack now 0 6 8 23 32 Next token is token '=' () Reducing stack by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (256) Shifting token number (256) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (256) -> $$ = nterm exp (256) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 
Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Stack now 0 6 4 12 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 4 12 23 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Stack now 0 6 4 12 23 32 Reading a token Next token is token ')' () Reducing stack by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' () Shifting token ')' () Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' () Reducing stack by rule 12 (line 103): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading 
a token Next token is token number (64) Shifting token number (64) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (64) -> $$ = nterm exp (64) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | error ./calc.at:1443: $PREPARSER ./calc input stderr: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' () Shifting token '*' () 
Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '=' () Reducing stack by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Stack now 0 8 20 29 Next token is token '=' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Stack now 0 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (7) Shifting token number (7) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (7) -> $$ = nterm exp (7) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 
Stack now 0 6 8 20 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Stack now 0 6 8 20 29 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 8 20 29 21 2 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 6 8 20 29 21 2 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 10 Stack now 0 6 8 20 29 21 2 10 Reading a token Next token is token '=' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Stack now 0 6 8 20 29 21 30 Next token is token '=' () Reducing stack by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Stack now 0 6 8 20 29 Next token is token '=' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (5) Shifting token number (5) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (5) -> $$ = nterm exp (5) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line 
() Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Stack now 0 6 2 10 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 2 10 23 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Stack now 0 6 2 10 23 32 Reading a token Next token is token '=' () Reducing stack by rule 12 (line 103): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' () 
Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' () Shifting token ')' () Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' () Reducing stack by rule 12 (line 103): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp 
(1) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' () Reducing stack by rule 11 
(line 102): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' 
() Shifting token '-' () Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '-' () Reducing stack by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Stack now 0 6 8 Next token is token '-' () Shifting token '-' () Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' () Reducing stack by rule 8 (line 91): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (4) Shifting token number (4) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4) -> $$ = nterm exp (4) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' () Reducing stack by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 
Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 6 8 19 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 8 19 4 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Stack now 0 6 8 19 4 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Stack now 0 6 8 19 4 12 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 6 8 19 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Stack now 0 6 8 19 4 12 19 28 Reading a token Next token is token ')' () Reducing stack by rule 8 (line 91): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Stack now 0 6 8 19 4 12 Next token is token ')' () Shifting token ')' () Entering state 26 Stack now 0 6 8 19 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' () Reducing stack by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 8 18 1 Reducing stack 
by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Stack now 0 6 8 23 32 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 6 8 23 32 23 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Stack now 0 6 8 23 32 23 32 Reading a token Next token is token '=' () Reducing stack by rule 12 (line 103): $1 = nterm exp (2) $2 = token 
'^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Stack now 0 6 8 23 32 Next token is token '=' () Reducing stack by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (256) Shifting token number (256) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (256) -> $$ = nterm exp (256) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Stack now 0 6 4 12 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 6 4 12 23 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Stack now 0 6 4 12 23 32 Reading a token Next token is token ')' () Reducing stack by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Stack now 0 6 4 12 
Next token is token ')' () Shifting token ')' () Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' () Reducing stack by rule 12 (line 103): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Stack now 0 6 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (64) Shifting token number (64) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (64) -> $$ = nterm exp (64) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Stack now 0 6 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1445: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting 
parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 input: | 1 2 ./calc.at:1445: $PREPARSER ./calc input stderr: ./calc.at:1443: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Stack now 0 Cleanup: discarding lookahead token number (2) Stack now 0 ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Stack now 0 Cleanup: discarding lookahead token number (2) Stack now 0 ./calc.at:1443: cat stderr ./calc.at:1445: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 = 2 = 3 ./calc.at:1443: $PREPARSER ./calc input ./calc.at:1445: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 18 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 input: ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | 1//2 ./calc.at:1445: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, 
unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 18 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Stack now 0 8 22 Reading a token Next token is token '/' () syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' () Stack now 0 8 Error: popping nterm exp (1) Stack now 0 Cleanup: discarding lookahead token '/' () Stack now 0 ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1443: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Stack now 0 8 22 Reading a token Next token is token '/' () syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
Error: popping token '/' () Stack now 0 8 Error: popping nterm exp (1) Stack now 0 Cleanup: discarding lookahead token '/' () Stack now 0 ./calc.at:1443: cat stderr ./calc.at:1445: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1445: cat stderr input: | | +1 ./calc.at:1443: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | error ./calc.at:1445: $PREPARSER ./calc input stderr: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 Starting parse Entering state 0 Stack now 0 Reading a token Next token is 
token invalid token () syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token () Stack now 0 ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token () syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token () Stack now 0 ./calc.at:1443: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1443: cat stderr ./calc.at:1443: $PREPARSER ./calc /dev/null ./calc.at:1445: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token end of input (1.1: ) 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) Stack now 0 ./calc.at:1445: cat stderr ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token end of input (1.1: ) 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) Stack now 0 input: | 1 = 2 = 3 ./calc.at:1445: $PREPARSER ./calc input ./calc.at:1443: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '=' () syntax error, unexpected '=' Error: popping nterm exp (2) Stack now 0 8 18 Error: popping token '=' () Stack now 0 8 Error: popping nterm exp (1) Stack now 0 Cleanup: discarding lookahead token '=' () Stack now 0 ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1443: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '=' () syntax error, unexpected '=' Error: popping nterm exp (2) Stack now 0 8 18 Error: popping token '=' () Stack now 0 8 Error: popping nterm exp (1) Stack now 0 Cleanup: discarding lookahead token '=' () Stack now 0 input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 
./calc.at:1443: $PREPARSER ./calc input ./calc.at:1445: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1445: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) 
-> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 20 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 20 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 20 4 12 21 1 Reducing stack by rule 5 (line 79): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Stack now 0 8 20 4 12 21 30 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: 
error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | | +1 ./calc.at:1445: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 
Stack now 0 8 20 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 20 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 20 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 20 4 12 21 1 Reducing stack by rule 5 (line 79): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Stack now 0 8 20 4 12 21 30 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: 
error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '+' () syntax error, unexpected '+' Error: popping nterm input () Stack now 0 Cleanup: discarding lookahead token '+' () Stack now 0 ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1443: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token '+' () syntax error, unexpected '+' Error: popping nterm input () Stack now 0 Cleanup: discarding lookahead token '+' () Stack now 0 ./calc.at:1443: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1443: $PREPARSER ./calc input ./calc.at:1445: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1445: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Stack now 0 4 5 15 Reducing stack by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1445: $PREPARSER ./calc /dev/null ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token end of input () syntax error, unexpected end of input Cleanup: discarding lookahead token end of input () Stack now 0 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Stack now 0 4 5 15 Reducing stack by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1443: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token end of input () syntax error, unexpected end of input Cleanup: discarding lookahead token end of input () Stack now 0 ./calc.at:1443: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1443: $PREPARSER ./calc input ./calc.at:1445: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1445: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.12: 2) Error: discarding 
token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' 
stderr | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1445: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) 
Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 
4 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Stack now 0 4 11 Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' () Reducing stack by rule 7 (line 90): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 
8 20 4 12 20 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' () Stack now 0 8 20 4 12 Error: popping nterm exp (3) Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' () Reducing stack by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Stack now 0 8 Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 8 20 4 12 21 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 30 Stack now 0 8 20 4 12 21 30 Reading a token Next token is token '*' () Reducing stack by rule 9 (line 92): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) 
Entering state 12 Stack now 0 8 20 4 12 Next token is token '*' () Shifting token '*' () Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' () Stack now 0 8 20 4 12 Error: popping nterm exp (2) Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' () Reducing stack by rule 7 (line 90): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Stack now 0 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 
6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input ()
./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1443: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Stack now 0 4 11 Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Stack
now 0 8 20 4 12 Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' () Reducing stack by rule 7 (line 90): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' () Stack now 0 8 20 4 12 Error: popping nterm exp (3) Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' () Reducing stack by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Stack now 0 8 Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 8 20 4 12 21 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 30 Stack now 0 8 20 4 12 21 30 Reading a token Next token is token '*' () Reducing stack by rule 9 (line 92): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) 
Entering state 12 Stack now 0 8 20 4 12 Next token is token '*' () Shifting token '*' () Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' () Stack now 0 8 20 4 12 Error: popping nterm exp (2) Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' () Reducing stack by rule 7 (line 90): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Stack now 0 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 
6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input ()
./calc.at:1443: cat stderr
input:
| (* *) + (*) + (*)
./calc.at:1443: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 20 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 20 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1445: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1445: cat stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 20 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 20 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1443: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
input:
| (!!) + (1 2) = 1
./calc.at:1445: $PREPARSER ./calc input
./calc.at:1443: cat stderr
stderr:
input:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Stack now 0 4 5 15 Reducing stack by rule 16 (line 107): $1 = token '!' () $2 = token '!' () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token number (2) Error: discarding token number (2) Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' () Reducing stack by rule 7 (line 90): $1 = nterm exp 
(1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input ()
| 1 + 2 * 3 + !+ ++
./calc.at:1443: $PREPARSER ./calc input
./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Stack now 0 4 5 15 Reducing stack by rule 16 (line 107): $1 = token '!' () $2 = token '!' 
() Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token number (2) Error: discarding token number (2) Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp 
(2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input ()
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) 
$3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 20 5 14 Reducing stack by rule 17 (line 108): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7)
./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp 
(1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 20 5 14 Reducing stack by rule 17 (line 108): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7)
./calc.at:1443: $EGREP -c -v 'Return for a new token:|LAC:' stderr
./calc.at:1445: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
input:
| 1 + 2 * 3 + !- ++
./calc.at:1443: $PREPARSER ./calc input
./calc.at:1445: cat stderr
stderr:
input:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by 
rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 20 5 13 Reducing stack by rule 18 (line 109): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | (- *) + (1 2) = 1 ./calc.at:1445: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) 
Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 20 5 13 Reducing stack by rule 18 (line 109): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1443: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1443: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 106): $1 = token '-' () $2 = token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token number (2) Error: discarding token number (2) Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is 
token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () input: ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | (#) + (#) = 2222 ./calc.at:1443: $PREPARSER ./calc input stderr: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 106): $1 = token '-' () $2 = token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token number (2) Error: discarding token number (2) Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is 
token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 20 4 Reading a token 1.8: syntax error: 
invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.1-8: ) Stack now 0 8 20 4 Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.1-8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 
Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 20 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.1-8: ) Stack now 0 8 20 4 Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.1-8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-3: 
1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1445: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1443: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1445: cat stderr ./calc.at:1443: cat stderr input: | (* *) + (*) + (*) ./calc.at:1445: $PREPARSER ./calc input input: | (1 + #) = 1111 ./calc.at:1443: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Stack now 0 4 11 Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '\n' () Reducing stack by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 
6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' 
(1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Stack now 0 4 11 Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' () Error: discarding token '*' () Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '\n' () Reducing stack by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) 
Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1445: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1443: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1445: cat stderr input: ./calc.at:1443: cat stderr | 1 + 2 * 3 + !+ ++ ./calc.at:1445: $PREPARSER ./calc input stderr: input: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' () Reducing stack by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Stack now 0 8 Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' 
() Shifting token '!' () Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Stack now 0 8 20 5 14 Reducing stack by rule 17 (line 108): $1 = token '!' () $2 = token '+' () Stack now 0 8 20 Cleanup: popping token '+' () Cleanup: popping nterm exp (7) | (# + 1) = 1111 ./calc.at:1443: $PREPARSER ./calc input ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' () Reducing stack by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Stack now 0 8 Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' () Shifting token '!' 
() Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Stack now 0 8 20 5 14 Reducing stack by rule 17 (line 108): $1 = token '!' () $2 = token '+' () Stack now 0 8 20 Cleanup: popping token '+' () Cleanup: popping nterm exp (7) stderr: ./calc.at:1445: $EGREP -c -v 'Return for a new token:|LAC:' stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.1-4: ) Stack now 0 4 Shifting token error (1.1-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is 
token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.1-4: ) Stack now 0 4 Shifting token error (1.1-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): 
$1 = token '(' (1.1: ) $2 = token error (1.1-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) | 1 + 2 * 3 + !- ++ ./calc.at:1445: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) 
Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' () Reducing stack by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Stack now 0 8 Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Stack now 0 8 20 5 13 Reducing stack by rule 18 (line 109): $1 = token '!' () $2 = token '-' () Stack now 0 8 20 Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1443: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1443: cat stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' () Reducing stack by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Stack now 0 8 Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Stack now 0 8 20 5 13 Reducing stack by rule 18 (line 109): $1 = token '!' 
() $2 = token '-' () Stack now 0 8 20 Cleanup: popping token '+' () Cleanup: popping nterm exp (7) input: | (1 + # + 1) = 1111 ./calc.at:1443: $PREPARSER ./calc input ./calc.at:1445: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 
(line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1445: cat stderr ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | (#) + (#) = 2222 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' 
(1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp 
(1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1445: $PREPARSER ./calc input stderr: ./calc.at:1443: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Stack now 0 4 11 Next token is token invalid token () Error: discarding token invalid token () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token invalid token () Error: discarding token invalid token () Error: popping token error () Stack now 0 8 
20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (2222) Shifting token number (2222) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (2222) -> $$ = nterm exp (2222) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1443: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Stack now 0 4 11 Next token is 
token invalid token () Error: discarding token invalid token () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 20 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Next token is token invalid token () Error: discarding token invalid token () Error: popping token error () Stack now 0 8 20 4 Shifting token error () Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (2222) Shifting token number (2222) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (2222) -> $$ = nterm exp (2222) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () 
Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () input: | (1 + 1) / (1 - 1) ./calc.at:1443: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Stack now 0 4 12 20 29 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 
4 Stack now 0 8 22 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 22 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 22 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Stack now 0 8 22 4 12 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 22 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 22 4 12 19 28 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 22 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 22 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Stack now 0 8 22 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of 
input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1445: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1443: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1445: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Stack now 0 4 12 20 29 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 22 4 Reading a token Next token is token number (1.12: 1) 
Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 22 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 22 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Stack now 0 8 22 4 12 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 22 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 22 4 12 19 28 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 22 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 22 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Stack now 0 8 22 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | (1 + #) = 
1111 ./calc.at:1445: $PREPARSER ./calc input ./calc.at:1443: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 4 12 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Stack now 0 4 12 Error: popping nterm exp (1) Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Next token is token invalid token () Error: discarding token invalid token () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering 
state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1443: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 4 12 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Stack now 0 4 12 Error: popping nterm exp (1) Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Next token is token invalid token () Error: discarding token invalid token () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Stack 
now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () 532. calc.at:1443: ok ./calc.at:1445: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1445: cat stderr input: | (# + 1) = 1111 ./calc.at:1445: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Stack now 0 4 11 Next token is token invalid token () Error: discarding token invalid token () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' () Error: discarding token '+' () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1) Error: discarding token number (1) Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1111) $2 = 
token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Stack now 0 4 11 Next token is token invalid token () Error: discarding token invalid token () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' () Error: discarding token '+' () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1) Error: discarding token number (1) Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (1111) $2 = 
token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1445: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 535. calc.at:1448: testing Calculator C++ %header %locations api.location.file=none ... 
./calc.at:1448: mv calc.y.tmp calc.y ./calc.at:1445: cat stderr ./calc.at:1448: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y input: | (1 + # + 1) = 1111 ./calc.at:1445: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 4 12 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Stack now 0 4 12 Error: popping nterm exp (1) Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Next token is token invalid token () Error: discarding token invalid token () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' () Error: discarding token '+' () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1) Error: discarding token number (1) Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Stack now 0 8 18 
1 Reducing stack by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 4 12 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Stack now 0 4 12 Error: popping nterm exp (1) Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Next token is token invalid token () Error: discarding token invalid token () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' () Error: discarding token '+' () Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 
Reading a token Next token is token number (1) Error: discarding token number (1) Error: popping token error () Stack now 0 4 Shifting token error () Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' () Reducing stack by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1445: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1445: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1445: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 4 12 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Stack now 0 4 12 20 29 Reading a token Next token is token ')' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Stack now 0 4 12 Next token is token ')' () Shifting token ')' () Entering state 26 Stack now 0 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Stack now 0 8 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 22 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 22 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 8 22 4 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Stack now 0 8 22 4 12 19 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 
8 22 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 28 Stack now 0 8 22 4 12 19 28 Reading a token Next token is token ')' () Reducing stack by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Stack now 0 8 22 4 12 Next token is token ')' () Shifting token ')' () Entering state 26 Stack now 0 8 22 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Stack now 0 8 22 31 Reading a token Next token is token '\n' () Reducing stack by rule 10 (line 93): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Stack now 0 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Stack now 0 6 Reading a token Next token is token end of input () Shifting token end of input () Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Stack now 0 4 12 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 4 12 20 1 Reducing 
stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Stack now 0 4 12 20 29 Reading a token Next token is token ')' () Reducing stack by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Stack now 0 4 12 Next token is token ')' () Shifting token ')' () Entering state 26 Stack now 0 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Stack now 0 8 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Stack now 0 8 22 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 22 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Stack now 0 8 22 4 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Stack now 0 8 22 4 12 19 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Stack now 0 8 22 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 28 Stack now 0 8 22 4 12 19 28 Reading a token Next token is token ')' () Reducing stack by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Stack now 0 8 22 4 12 Next token is token ')' () Shifting token ')' () Entering state 26 Stack now 0 8 22 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Stack now 0 8 22 31 Reading a token Next token is token '\n' () Reducing stack by rule 10 (line 93): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Stack now 0 8 Next token is token '\n' () 
Shifting token '\n' ()
Entering state 24
Stack now 0 8 24
Reducing stack by rule 4 (line 75):
   $1 = nterm exp (2)
   $2 = token '\n' ()
-> $$ = nterm line ()
Entering state 7
Stack now 0 7
Reducing stack by rule 1 (line 69):
   $1 = nterm line ()
-> $$ = nterm input ()
Entering state 6
Stack now 0 6
Reading a token
Next token is token end of input ()
Shifting token end of input ()
Entering state 16
Stack now 0 6 16
Stack now 0 6 16
Cleanup: popping token end of input ()
Cleanup: popping nterm input ()
./calc.at:1448: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS
./calc.at:1445: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1445: cat stderr
533. calc.at:1445: ok
536. calc.at:1449: testing Calculator C++ %header %locations api.location.file="my-location.hh" ...
./calc.at:1449: mv calc.y.tmp calc.y
./calc.at:1449: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y
./calc.at:1449: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS
stderr:
stdout:
./calc.at:1446: "$PERL" -ne '
  chomp;
  print "$ARGV:$.: {$_}\n"
    if (# No starting/ending empty lines.
        (eof || $. == 1) && /^\s*$/
        # No trailing space.
        || /\s$/
        # No tabs.
|| /\t/ )' calc.cc calc.hh input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1446: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Stack now 0 
8 18 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Stack now 0 6 8 20 29 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 20 29 21 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 20 29 21 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 20 29 21 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Stack now 0 6 8 20 29 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 
9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Stack now 0 6 8 20 29 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input 
(1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Stack now 0 6 2 10 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 23 1 Reducing stack by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Stack now 0 6 2 10 23 32 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp 
(4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm 
exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ 
= nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) 
Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number 
(9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 19 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 19 4 1 Reducing stack by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 19 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Stack now 0 6 8 19 4 12 19 Reading a token Next token is token number (10.10: 3) Shifting token 
number (10.10: 3) Entering state 1 Stack now 0 6 8 19 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Stack now 0 6 8 19 4 12 19 28 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 19 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Stack now 0 6 8 19 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 
Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Stack now 0 6 8 23 32 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 23 32 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Stack now 0 6 8 23 32 23 32 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Stack now 0 6 8 23 32 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number 
(12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Stack now 0 6 4 12 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Stack now 0 6 4 12 23 32 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 
13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (14.1: ) Shifting token end of input (14.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (14.1: ) Cleanup: 
popping nterm input (1.1-14.0: )
./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack by rule 6 (line 80): $1 
= nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Stack now 0 6 8 20 29 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 20 29 21 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 20 29 21 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 20 29 21 2 10 Reading a token Next token is token '=' (2.12: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Stack now 0 6 8 20 29 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm 
exp (2.5-10: -6) Entering state 29 Stack now 0 6 8 20 29 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 
Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Stack now 0 6 2 10 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 23 1 Reducing stack by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Stack now 0 6 2 10 23 32 Reading a token Next token is token '=' (4.6: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 
8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) Reducing stack by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (5.8: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) 
Shifting token '=' (5.8: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' 
(7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = 
token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '-' (9.7: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (9.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 
(line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 19 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 19 4 1 Reducing stack by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 19 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Stack now 0 6 8 19 4 12 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 19 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = 
token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Stack now 0 6 8 19 4 12 19 28 Reading a token Next token is token ')' (10.11: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 19 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Stack now 0 6 8 19 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (10.13: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 
Stack now 0 6 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Stack now 0 6 8 23 32 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 23 32 23 1 Reducing stack by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Stack now 0 6 8 23 32 23 32 Reading a token Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Stack now 0 6 8 23 32 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 18 1 Reducing stack 
by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Stack now 0 6 4 12 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Stack now 0 6 4 12 23 32 Reading a token Next token is token ')' (13.5: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm 
exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (13.9: ) Reducing stack by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (14.1: ) Shifting token end of input (14.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1446: $EGREP -c -v 'Return for a new token:|LAC:' stderr 
input:
  | 1 2
./calc.at:1446: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token number (1.1: 1)
Shifting token number (1.1: 1)
Entering state 1
Stack now 0 1
Reducing stack by rule 5 (line 79):
   $1 = token number (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Stack now 0 8
Reading a token
Next token is token number (1.3: 2)
1.3: syntax error, unexpected number
Error: popping nterm exp (1.1: 1)
Stack now 0
Cleanup: discarding lookahead token number (1.3: 2)
Stack now 0
./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token number (1.1: 1)
Shifting token number (1.1: 1)
Entering state 1
Stack now 0 1
Reducing stack by rule 5 (line 79):
   $1 = token number (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Stack now 0 8
Reading a token
Next token is token number (1.3: 2)
1.3: syntax error, unexpected number
Error: popping nterm exp (1.1: 1)
Stack now 0
Cleanup: discarding lookahead token number (1.3: 2)
Stack now 0
./calc.at:1446: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  { my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
      ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
      : "syntax error, unexpected $unexp"; }eg
' expout || exit 77
./calc.at:1446: cat stderr
input:
  | 1//2
./calc.at:1446: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token number (1.1: 1)
Shifting token number (1.1: 1)
Entering state 1
Stack now 0 1
Reducing stack by rule 5 (line 79):
   $1 = token number (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '/' (1.2: )
Shifting token '/' (1.2: )
Entering state 22
Stack now 0 8 22
Reading a token
Next token is token '/' (1.3: )
1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!'
Error: popping token '/' (1.2: )
Stack now 0 8
Error: popping nterm exp (1.1: 1)
Stack now 0
Cleanup: discarding lookahead token '/' (1.3: )
Stack now 0
./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token number (1.1: 1)
Shifting token number (1.1: 1)
Entering state 1
Stack now 0 1
Reducing stack by rule 5 (line 79):
   $1 = token number (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '/' (1.2: )
Shifting token '/' (1.2: )
Entering state 22
Stack now 0 8 22
Reading a token
Next token is token '/' (1.3: )
1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!'
Error: popping token '/' (1.2: )
Stack now 0 8
Error: popping nterm exp (1.1: 1)
Stack now 0
Cleanup: discarding lookahead token '/' (1.3: )
Stack now 0
./calc.at:1446: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  { my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4) ?
      "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
      : "syntax error, unexpected $unexp"; }eg
' expout || exit 77
./calc.at:1446: cat stderr
input:
  | error
./calc.at:1446: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token invalid token (1.1: )
1.1: syntax error, unexpected invalid token
Cleanup: discarding lookahead token invalid token (1.1: )
Stack now 0
./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token invalid token (1.1: )
1.1: syntax error, unexpected invalid token
Cleanup: discarding lookahead token invalid token (1.1: )
Stack now 0
./calc.at:1446: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  { my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
      ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
      : "syntax error, unexpected $unexp"; }eg
' expout || exit 77
./calc.at:1446: cat stderr
input:
  | 1 = 2 = 3
./calc.at:1446: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token number (1.1: 1)
Shifting token number (1.1: 1)
Entering state 1
Stack now 0 1
Reducing stack by rule 5 (line 79):
   $1 = token number (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Stack now 0 8
Reading a token
Next token is token '=' (1.3: )
Shifting token '=' (1.3: )
Entering state 18
Stack now 0 8 18
Reading a token
Next token is token number (1.5: 2)
Shifting token number (1.5: 2)
Entering state 1
Stack now 0 8 18 1
Reducing stack by rule 5 (line 79):
   $1 = token number (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 27
Stack now 0 8 18 27
Reading a token
Next token is token '=' (1.7: )
1.7: syntax error, unexpected '='
Error: popping nterm exp (1.5: 2)
Stack now 0 8 18
Error: popping token '=' (1.3: )
Stack now 0 8
Error: popping nterm exp (1.1: 1)
Stack
now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 18 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1446: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1446: cat stderr input: | | +1 ./calc.at:1446: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1446: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1446: cat stderr ./calc.at:1446: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token end of input (1.1: ) 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) Stack now 0 ./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token end of input (1.1: ) 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) Stack now 0 ./calc.at:1446: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1446: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1446: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 
Stack now 0 8 20 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 20 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 20 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 20 4 12 21 1 Reducing stack by rule 5 (line 79): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Stack now 0 8 20 4 12 21 30 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: 
error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.17: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 
Stack now 0 8 20 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 20 4 Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.20: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 20 4 Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.30: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 20 4 12 21 1 Reducing stack by rule 5 (line 79): $1 = token 
number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Stack now 0 8 20 4 12 21 30 Reading a token Next token is token '*' (1.39: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 20 4 Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.44: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: 
error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1446: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1446: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Stack now 0 4 5 15 Reducing stack by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Stack now 0 4 5 15 Reducing stack by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 20 4 Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.14: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering 
state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1446: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1446: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 90): $1 = 
nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 20 4 Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.15: ) Reducing stack by rule 7 (line 90): $1 = 
nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1446: cat stderr input: | (* *) + (*) + (*) ./calc.at:1446: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 20 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 20 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 20 4 Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.13: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 20 4 Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1446: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1446: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 20 5 14 Reducing stack by rule 17 (line 108): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token 
'!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 20 5 14 Reducing stack by rule 17 (line 108): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1446: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1446: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token 
'+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 20 5 13 Reducing stack by rule 18 (line 109): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) Reducing stack by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is 
token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 20 5 13 Reducing stack by rule 18 (line 109): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1446: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1446: cat stderr input: | (#) + (#) = 2222 ./calc.at:1446: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 20 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error 
(1.8: ) Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.1-8: ) Stack now 0 8 20 4 Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.1-8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of 
input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 20 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token invalid token (1.8: ) Error: discarding token invalid token (1.8: ) Error: popping token error (1.1-8: ) Stack now 0 8 20 4 Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.1-8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp 
(1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1446: cat stderr input: | (1 + #) = 1111 ./calc.at:1446: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token 
'=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' 
(1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1446: cat stderr input: | (# + 1) = 1111 ./calc.at:1446: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.1-4: ) Stack now 0 4 Shifting token error (1.1-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) 
-> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Error: popping token error (1.1-2: ) Stack now 0 4 Shifting token error (1.1-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Error: popping token error (1.1-4: ) Stack now 0 4 Shifting token error (1.1-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.1-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token 
Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1446: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1446: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 
18 Stack now 0 8 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 
4 11 Next token is token invalid token (1.6: ) Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting 
token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1446: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1446: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Stack now 0 4 12 20 29 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: 
) Entering state 4 Stack now 0 8 22 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 22 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 22 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Stack now 0 8 22 4 12 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 22 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 22 4 12 19 28 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 22 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 22 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Stack now 0 8 22 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: 
popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 20 1 Reducing stack by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Stack now 0 4 12 20 29 Reading a token Next token is token ')' (1.7: ) Reducing stack by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Stack now 0 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 22 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 22 4 1 Reducing stack by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 22 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Stack now 0 8 22 
4 12 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 22 4 12 19 1 Reducing stack by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 22 4 12 19 28 Reading a token Next token is token ')' (1.17: ) Reducing stack by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 22 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Stack now 0 8 22 4 12 26 Reducing stack by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Stack now 0 8 22 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of input (2.1: ) Shifting token end of input (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1446: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1446: cat stderr
534. calc.at:1446: ok
537. calc.at:1451: testing Calculator C++ %no-lines %header %locations api.location.file="my-location.hh" ...
./calc.at:1451: mv calc.y.tmp calc.y
./calc.at:1451: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y
./calc.at:1451: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS
stderr:
stdout:
./calc.at:1448: "$PERL" -ne '
  chomp;
  print "$ARGV:$.: {$_}\n"
    if (# No starting/ending empty lines.
        (eof || $. == 1) && /^\s*$/
        # No trailing space.
        || /\s$/
        # No tabs.
        || /\t/
       )' calc.cc calc.hh
input:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1448: $PREPARSER ./calc input
stderr:
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1448: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
| 1 2
./calc.at:1448: $PREPARSER ./calc input
stderr:
1.3: syntax error
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.3: syntax error
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1448: cat stderr
input:
| 1//2
./calc.at:1448: $PREPARSER ./calc input
stderr:
1.3: syntax error
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.3: syntax error
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1448: cat stderr
input:
| error
./calc.at:1448: $PREPARSER ./calc input
stderr:
1.1: syntax error
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.1: syntax error
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1448: cat stderr
input:
| 1 = 2 = 3
./calc.at:1448: $PREPARSER ./calc input
stderr:
1.7: syntax error
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.7: syntax error
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1448: cat stderr
input:
|
| +1
./calc.at:1448: $PREPARSER ./calc input
stderr:
2.1: syntax error
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
2.1: syntax error
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
stdout:
./calc.at:1449: "$PERL" -ne '
  chomp;
  print "$ARGV:$.: {$_}\n"
    if (# No starting/ending empty lines.
        (eof || $. == 1) && /^\s*$/
        # No trailing space.
        || /\s$/
        # No tabs.
        || /\t/
       )' calc.cc calc.hh
input:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1449: $PREPARSER ./calc input
./calc.at:1448: cat stderr
stderr:
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1448: $PREPARSER ./calc /dev/null
stderr:
stderr:
./calc.at:1449: $EGREP -c -v 'Return for a new token:|LAC:' stderr
1.1: syntax error
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
stderr:
1.1: syntax error
| 1 2
./calc.at:1449: $PREPARSER ./calc input
stderr:
1.3: syntax error
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
1.3: syntax error
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1448: cat stderr
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1448: $PREPARSER ./calc input
./calc.at:1449: cat stderr
stderr:
1.2: syntax error
1.18: syntax error
1.23: syntax error
1.41: syntax error
1.1-46: error: 4444 != 1
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
input:
| 1//2
./calc.at:1449: $PREPARSER ./calc input
stderr:
1.2: syntax error
1.18: syntax error
1.23: syntax error
1.41: syntax error
1.1-46: error: 4444 != 1
stderr:
1.3: syntax error
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
1.3: syntax error
./calc.at:1448: cat stderr
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
| (!!)
+ (1 2) = 1
./calc.at:1448: $PREPARSER ./calc input
stderr:
1.11: syntax error
1.1-16: error: 2222 != 1
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1449: cat stderr
stderr:
1.11: syntax error
1.1-16: error: 2222 != 1
input:
| error
./calc.at:1449: $PREPARSER ./calc input
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
1.1: syntax error
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.1: syntax error
./calc.at:1448: cat stderr
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
| (- *) + (1 2) = 1
./calc.at:1448: $PREPARSER ./calc input
stderr:
1.4: syntax error
1.12: syntax error
1.1-17: error: 2222 != 1
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1449: cat stderr
stderr:
1.4: syntax error
1.12: syntax error
1.1-17: error: 2222 != 1
input:
| 1 = 2 = 3
./calc.at:1449: $PREPARSER ./calc input
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
1.7: syntax error
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.7: syntax error
./calc.at:1448: cat stderr
input:
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
| (* *) + (*) + (*)
./calc.at:1448: $PREPARSER ./calc input
stderr:
1.2: syntax error
1.10: syntax error
1.16: syntax error
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1449: cat stderr
stderr:
1.2: syntax error
1.10: syntax error
1.16: syntax error
input:
|
| +1
./calc.at:1449: $PREPARSER ./calc input
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
2.1: syntax error
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
2.1: syntax error
./calc.at:1448: cat stderr
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
| 1 + 2 * 3 + !+ ++
./calc.at:1448: $PREPARSER ./calc input
stderr:
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1449: cat stderr
./calc.at:1448: $EGREP -c -v 'Return for a new token:|LAC:' stderr
./calc.at:1449: $PREPARSER ./calc /dev/null
input:
stderr:
1.1: syntax error
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
| 1 + 2 * 3 + !- ++
./calc.at:1448: $PREPARSER ./calc input
stderr:
stderr:
1.1: syntax error
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1449: cat stderr
./calc.at:1448: cat stderr
input:
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1449: $PREPARSER ./calc input
| (#) + (#) = 2222
./calc.at:1448: $PREPARSER ./calc input
stderr:
1.2: syntax error: invalid character: '#'
1.8: syntax error: invalid character: '#'
stderr:
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
1.2: syntax error
1.18: syntax error
1.23: syntax error
1.41: syntax error
1.1-46: error: 4444 != 1
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error: invalid character: '#'
1.8: syntax error: invalid character: '#'
stderr:
1.2: syntax error
1.18: syntax error
1.23: syntax error
1.41: syntax error
1.1-46: error: 4444 != 1
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1448: cat stderr
./calc.at:1449: cat stderr
input:
| (1 + #) = 1111
input:
./calc.at:1448: $PREPARSER ./calc input
| (!!)
+ (1 2) = 1
./calc.at:1449: $PREPARSER ./calc input
stderr:
1.6: syntax error: invalid character: '#'
stderr:
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
1.11: syntax error
1.1-16: error: 2222 != 1
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.6: syntax error: invalid character: '#'
stderr:
1.11: syntax error
1.1-16: error: 2222 != 1
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1448: cat stderr
./calc.at:1449: cat stderr
input:
| (# + 1) = 1111
./calc.at:1448: $PREPARSER ./calc input
input:
| (- *) + (1 2) = 1
stderr:
1.2: syntax error: invalid character: '#'
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1449: $PREPARSER ./calc input
stderr:
stderr:
1.2: syntax error: invalid character: '#'
1.4: syntax error
1.12: syntax error
1.1-17: error: 2222 != 1
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.4: syntax error
1.12: syntax error
1.1-17: error: 2222 != 1
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1448: cat stderr
input:
./calc.at:1449: cat stderr
| (1 + # + 1) = 1111
./calc.at:1448: $PREPARSER ./calc input
input:
stderr:
1.6: syntax error: invalid character: '#'
| (* *) + (*) + (*)
./calc.at:1449: $PREPARSER ./calc input
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
stderr:
1.2: syntax error
1.10: syntax error
1.16: syntax error
1.6: syntax error: invalid character: '#'
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error
1.10: syntax error
1.16: syntax error
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1448: cat stderr
input:
| (1 + 1) / (1 - 1)
./calc.at:1448: $PREPARSER ./calc input
./calc.at:1449: cat stderr
stderr:
1.11-17: error: null divisor
./calc.at:1448: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.11-17: error: null divisor
input:
| 1 + 2 * 3 + !+ ++
./calc.at:1449: $PREPARSER ./calc input
./calc.at:1448: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1449: $EGREP -c -v 'Return for a new token:|LAC:' stderr
./calc.at:1448: cat stderr
input:
535. calc.at:1448: ok
| 1 + 2 * 3 + !- ++
./calc.at:1449: $PREPARSER ./calc input
stderr:
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1449: cat stderr
input:
| (#) + (#) = 2222
./calc.at:1449: $PREPARSER ./calc input
stderr:
1.2: syntax error: invalid character: '#'
1.8: syntax error: invalid character: '#'
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error: invalid character: '#'
1.8: syntax error: invalid character: '#'
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
538. calc.at:1453: testing Calculator C++ %locations parse.lac=full parse.error=verbose ...
./calc.at:1453: mv calc.y.tmp calc.y
./calc.at:1449: cat stderr
./calc.at:1453: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y
input:
| (1 + #) = 1111
./calc.at:1449: $PREPARSER ./calc input
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1449: cat stderr
input:
| (# + 1) = 1111
./calc.at:1449: $PREPARSER ./calc input
stderr:
1.2: syntax error: invalid character: '#'
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error: invalid character: '#'
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1449: cat stderr
input:
| (1 + # + 1) = 1111
./calc.at:1449: $PREPARSER ./calc input
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1453: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1449: cat stderr
input:
| (1 + 1) / (1 - 1)
./calc.at:1449: $PREPARSER ./calc input
stderr:
1.11-17: error: null divisor
./calc.at:1449: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.11-17: error: null divisor
./calc.at:1449: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1449: cat stderr
536. calc.at:1449: ok
539. calc.at:1454: testing Calculator C++ %locations parse.lac=full parse.error=detailed ...
./calc.at:1454: mv calc.y.tmp calc.y
./calc.at:1454: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y
./calc.at:1454: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS
stderr:
stdout:
./calc.at:1451: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs.
|| /\t/ )' calc.cc calc.hh
input:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1451: $PREPARSER ./calc input
stderr:
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1451: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
| 1 2
./calc.at:1451: $PREPARSER ./calc input
stderr:
1.3: syntax error
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.3: syntax error
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
input:
| 1//2
./calc.at:1451: $PREPARSER ./calc input
stderr:
1.3: syntax error
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.3: syntax error
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
input:
| error
./calc.at:1451: $PREPARSER ./calc input
stderr:
1.1: syntax error
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.1: syntax error
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
input:
| 1 = 2 = 3
./calc.at:1451: $PREPARSER ./calc input
stderr:
1.7: syntax error
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.7: syntax error
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
input:
|
| +1
./calc.at:1451: $PREPARSER ./calc input
stderr:
2.1: syntax error
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
2.1: syntax error
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
./calc.at:1451: $PREPARSER ./calc /dev/null
stderr:
1.1: syntax error
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.1: syntax error
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1451: $PREPARSER ./calc input
stderr:
1.2: syntax error
1.18: syntax error
1.23: syntax error
1.41: syntax error
1.1-46: error: 4444 != 1
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error
1.18: syntax error
1.23: syntax error
1.41: syntax error
1.1-46: error: 4444 != 1
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
input:
| (!!) + (1 2) = 1
./calc.at:1451: $PREPARSER ./calc input
stderr:
1.11: syntax error
1.1-16: error: 2222 != 1
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.11: syntax error
1.1-16: error: 2222 != 1
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
input:
| (- *) + (1 2) = 1
./calc.at:1451: $PREPARSER ./calc input
stderr:
1.4: syntax error
1.12: syntax error
1.1-17: error: 2222 != 1
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.4: syntax error
1.12: syntax error
1.1-17: error: 2222 != 1
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
input:
| (* *) + (*) + (*)
./calc.at:1451: $PREPARSER ./calc input
stderr:
1.2: syntax error
1.10: syntax error
1.16: syntax error
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error
1.10: syntax error
1.16: syntax error
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
input:
| 1 + 2 * 3 + !+ ++
./calc.at:1451: $PREPARSER ./calc input
stderr:
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1451: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
| 1 + 2 * 3 + !- ++
./calc.at:1451: $PREPARSER ./calc input
stderr:
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
input:
| (#) + (#) = 2222
./calc.at:1451: $PREPARSER ./calc input
stderr:
1.2: syntax error: invalid character: '#'
1.8: syntax error: invalid character: '#'
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error: invalid character: '#'
1.8: syntax error: invalid character: '#'
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
input:
| (1 + #) = 1111
./calc.at:1451: $PREPARSER ./calc input
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
input:
| (# + 1) = 1111
./calc.at:1451: $PREPARSER ./calc input
stderr:
1.2: syntax error: invalid character: '#'
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error: invalid character: '#'
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
input:
| (1 + # + 1) = 1111
./calc.at:1451: $PREPARSER ./calc input
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.6: syntax error: invalid character: '#'
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
input:
| (1 + 1) / (1 - 1)
./calc.at:1451: $PREPARSER ./calc input
stderr:
1.11-17: error: null divisor
./calc.at:1451: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.11-17: error: null divisor
./calc.at:1451: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1451: cat stderr
537. calc.at:1451: ok
540. calc.at:1455: testing Calculator C++ %locations parse.lac=full parse.error=detailed parse.trace ...
./calc.at:1455: mv calc.y.tmp calc.y
./calc.at:1455: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y
./calc.at:1455: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS
stderr:
stdout:
./calc.at:1453: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs.
|| /\t/ )' calc.cc
input:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1453: $PREPARSER ./calc input
stderr:
./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1453: $EGREP -c -v 'Return for a new token:|LAC:' stderr
input:
| 1 2
./calc.at:1453: $PREPARSER ./calc input
stderr:
1.3: syntax error, unexpected number
./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.3: syntax error, unexpected number
./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1453: cat stderr
input:
| 1//2
./calc.at:1453: $PREPARSER ./calc input
stderr:
1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!'
./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!'
./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1453: cat stderr
input:
| error
./calc.at:1453: $PREPARSER ./calc input
stderr:
1.1: syntax error, unexpected invalid token
./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.1: syntax error, unexpected invalid token
./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1453: cat stderr
input:
| 1 = 2 = 3
./calc.at:1453: $PREPARSER ./calc input
stderr:
1.7: syntax error, unexpected '='
./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.7: syntax error, unexpected '='
./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1453: cat stderr
input:
|
| +1
./calc.at:1453: $PREPARSER ./calc input
stderr:
2.1: syntax error, unexpected '+'
./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
2.1: syntax error, unexpected '+'
./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1453: cat stderr
./calc.at:1453: $PREPARSER ./calc /dev/null
stderr:
1.1: syntax error, unexpected end of input
./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.1: syntax error, unexpected end of input
./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1453: cat stderr
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1453: $PREPARSER ./calc input
stderr:
1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.1-46: error: 4444 != 1
./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!'
1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
1.1-46: error: 4444 != 1
./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1453: cat stderr
input:
| (!!) + (1 2) = 1
./calc.at:1453: $PREPARSER ./calc input
stderr:
1.11: syntax error, unexpected number
1.1-16: error: 2222 != 1
./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.11: syntax error, unexpected number
1.1-16: error: 2222 != 1
./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1453: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1453: $PREPARSER ./calc input stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: stdout: ./calc.at:1454: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc ./calc.at:1453: cat stderr input: | (* *) + (*) + (*) ./calc.at:1453: $PREPARSER ./calc input input: stderr: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1454: $PREPARSER ./calc input 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' stderr: ./calc.at:1454: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 2 ./calc.at:1453: cat stderr ./calc.at:1454: $PREPARSER ./calc input input: stderr: | 1 + 2 * 3 + !+ ++ ./calc.at:1453: $PREPARSER ./calc input 1.3: syntax error, unexpected number ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error, unexpected number stderr: ./calc.at:1453: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 + 2 * 3 + !- ++ ./calc.at:1453: $PREPARSER ./calc input stderr: ./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1454: cat stderr stderr: input: | 1//2 ./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1454: $PREPARSER ./calc input ./calc.at:1453: cat stderr stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
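Editorial note: the testsuite repeatedly pipes `expout` through the same Perl one-liner to rewrite Java-style diagnostics (`syntax error on token [X] (expected: [A] [B])`) into bison's C-style wording (`syntax error, unexpected X, expecting A or B`), listing expectations only when there are between two and four of them. A minimal Python sketch of that rewrite follows; the function name `normalize` and the sample token names are illustrative, not from the testsuite:

```python
import re

def normalize(line):
    """Mirror the testsuite's Perl one-liner: rewrite
    "syntax error on token [X] (expected: [A] [B])" into
    "syntax error, unexpected X, expecting A or B"."""
    def repl(m):
        unexp = m.group(1)
        exps = re.findall(r'\[(.*?)\]', m.group(2))
        # The Perl condition ($#exps && $#exps < 4) uses the last index:
        # list the expectations only when there are 2..4 of them.
        if len(exps) - 1 and len(exps) - 1 < 4:
            return ("syntax error, unexpected %s, expecting %s"
                    % (unexp, " or ".join(exps)))
        return "syntax error, unexpected %s" % unexp
    return re.sub(r'syntax error on token \[(.*?)\] \(expected: (.*)\)',
                  repl, line)
```

Lines that do not match the pattern pass through unchanged, which is why the one-liner is safe to apply to every `expout` in the log.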
./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (#) + (#) = 2222 ./calc.at:1453: $PREPARSER ./calc input stderr: 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1454: cat stderr ./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1453: cat stderr input: input: | (1 + #) = 1111 ./calc.at:1453: $PREPARSER ./calc input stderr: | error ./calc.at:1454: $PREPARSER ./calc input 1.6: syntax error: invalid character: '#' ./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' stderr: 1.1: syntax error, unexpected invalid token ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error, unexpected invalid token ./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1453: cat stderr input: | (# + 1) = 1111 ./calc.at:1453: $PREPARSER ./calc input ./calc.at:1454: cat stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 = 2 = 3 stderr: ./calc.at:1454: $PREPARSER ./calc input 1.2: syntax error: invalid character: '#' stderr: 1.7: syntax error, unexpected '=' ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.7: syntax error, unexpected '=' ./calc.at:1453: cat stderr input: ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | (1 + # + 1) = 1111 ./calc.at:1453: $PREPARSER ./calc input stderr: ./calc.at:1454: cat stderr 1.6: syntax error: invalid character: '#' ./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' input: | | +1 ./calc.at:1454: $PREPARSER ./calc input stderr: ./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 2.1: syntax error, unexpected '+' ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error, unexpected '+' ./calc.at:1453: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1453: $PREPARSER ./calc input ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.11-17: error: null divisor ./calc.at:1453: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1454: cat stderr stderr: 1.11-17: error: null divisor ./calc.at:1454: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error, unexpected end of file ./calc.at:1453: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1453: cat stderr stderr: 1.1: syntax error, unexpected end of file 538. calc.at:1453: ok ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1454: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1454: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.1-46: error: 4444 != 1 ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1454: cat stderr input: | (!!) 
+ (1 2) = 1 ./calc.at:1454: $PREPARSER ./calc input stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error, unexpected number 1.1-16: error: 2222 != 1 541. calc.at:1457: testing Calculator C++ parse.error=custom ... ./calc.at:1457: mv calc.y.tmp calc.y ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1457: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1454: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1454: $PREPARSER ./calc input stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.12: syntax error, unexpected number 1.1-17: error: 2222 != 1 ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1454: cat stderr ./calc.at:1457: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS input: | (* *) + (*) + (*) ./calc.at:1454: $PREPARSER ./calc input stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1454: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1454: $PREPARSER ./calc input stderr: ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1454: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1454: $PREPARSER ./calc input stderr: ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1454: cat stderr input: | (#) + (#) = 2222 ./calc.at:1454: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1454: cat stderr input: | (1 + #) = 1111 ./calc.at:1454: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1454: cat stderr input: | (# + 1) = 1111 ./calc.at:1454: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1454: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1454: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1454: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1454: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1454: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1454: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1454: cat stderr 539. calc.at:1454: ok 542. calc.at:1458: testing Calculator C++ parse.error=custom %locations api.prefix={calc} %parse-param {semantic_value *result}{int *count}{int *nerrs} ... ./calc.at:1458: mv calc.y.tmp calc.y ./calc.at:1458: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1458: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: stdout: ./calc.at:1455: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1455: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 92): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 92): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 92): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '=' (1.11: ) LAC: initial context established for '=' LAC: checking lookahead '=': R9 G29 R7 G8 S18 Reducing stack by rule 9 (line 105): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 
18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.14-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 92): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 92): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Stack now 0 6 8 20 29 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 20 29 21 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 20 29 21 2 1 Reducing stack by rule 5 (line 92): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 20 29 21 2 10 Reading a token 
Next token is token '=' (2.12: ) LAC: initial context established for '=' LAC: checking lookahead '=': R11 G30 R9 G29 R7 G8 S18 Reducing stack by rule 11 (line 115): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Stack now 0 6 8 20 29 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 105): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Stack now 0 6 8 20 29 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 103): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 92): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (2.16-3.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R11 G27 R6 G8 S24 Reducing stack by rule 11 (line 115): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 93): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) 
-> $$ = nterm line (2.1-3.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 87): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 92): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Stack now 0 6 2 10 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 23 1 Reducing stack by rule 5 (line 92): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Stack now 0 6 2 10 23 32 Reading a token Next token is token '=' (4.6: ) LAC: initial context established for '=' LAC: checking lookahead '=': R12 G10 R11 G8 S18 Reducing stack by rule 12 (line 116): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 115): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a 
token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 92): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R11 G27 R6 G8 S24 Reducing stack by rule 11 (line 115): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 93): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 92): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) LAC: initial context established for ')' LAC: checking lookahead ')': R11 G12 S26 Reducing stack by rule 11 (line 115): $1 = token '-' (5.2: ) $2 = 
nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) LAC: initial context discarded due to shift Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 117): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 92): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (5.8: ) LAC: initial context established for '=' LAC: checking lookahead '=': R12 G8 S18 Reducing stack by rule 12 (line 116): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (5.11-6.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (5.1-10: 1) $2 = token 
'\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 87): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 92): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token Next token is token '=' (7.6: ) LAC: initial context established for '=' LAC: checking lookahead '=': R11 G10 R11 G10 R11 G8 S18 Reducing stack by rule 11 (line 115): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 115): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 115): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) LAC: initial context discarded due to shift 
Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 92): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R11 G27 R6 G8 S24 Reducing stack by rule 11 (line 115): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 93): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 87): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 92): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) 
Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 92): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '-' (9.7: ) LAC: initial context established for '-' LAC: checking lookahead '-': R8 G8 S19 Reducing stack by rule 8 (line 104): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) LAC: initial context discarded due to shift Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 92): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (9.11: ) LAC: initial context established for '=' LAC: checking lookahead '=': R8 G8 S18 Reducing stack by rule 8 (line 104): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 92): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) LAC: initial context established for 
'\n' LAC: checking lookahead '\n': R11 G27 R6 G8 S24 Reducing stack by rule 11 (line 115): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 93): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 92): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 19 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 19 4 1 Reducing stack by rule 5 (line 92): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 19 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Stack now 0 6 8 19 4 12 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 19 4 12 19 1 Reducing stack by rule 5 (line 92): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 
28 Stack now 0 6 8 19 4 12 19 28 Reading a token Next token is token ')' (10.11: ) LAC: initial context established for ')' LAC: checking lookahead ')': R8 G12 S26 Reducing stack by rule 8 (line 104): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 19 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) LAC: initial context discarded due to shift Entering state 26 Stack now 0 6 8 19 4 12 26 Reducing stack by rule 13 (line 117): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (10.13: ) LAC: initial context established for '=' LAC: checking lookahead '=': R8 G8 S18 Reducing stack by rule 8 (line 104): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (10.16-11.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line 
(10.1-11.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 87): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 92): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 92): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Stack now 0 6 8 23 32 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 23 32 23 1 Reducing stack by rule 5 (line 92): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Stack now 0 6 8 23 32 23 32 Reading a token Next token is token '=' (12.7: ) LAC: initial context established for '=' LAC: checking lookahead '=': R12 G32 R12 G8 S18 Reducing stack by rule 12 (line 116): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Stack now 0 6 8 23 32 Next token is token '=' (12.7: ) Reducing stack by rule 
12 (line 116): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (12.12-13.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 92): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Stack now 0 6 4 12 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 23 1 Reducing stack by 
rule 5 (line 92): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Stack now 0 6 4 12 23 32 Reading a token Next token is token ')' (13.5: ) LAC: initial context established for ')' LAC: checking lookahead ')': R12 G12 S26 Reducing stack by rule 12 (line 116): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) LAC: initial context discarded due to shift Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 117): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 92): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (13.9: ) LAC: initial context established for '=' LAC: checking lookahead '=': R12 G8 S18 Reducing stack by rule 12 (line 116): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (13.13-14.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by 
rule 6 (line 93): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (14.1: ) Shifting token end of file (14.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 92): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 92): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 92): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next 
token is token '=' (1.11: ) LAC: initial context established for '=' LAC: checking lookahead '=': R9 G29 R7 G8 S18 Reducing stack by rule 9 (line 105): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '=' (1.11: ) Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.14-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Stack now 0 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 92): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Stack now 0 6 8 20 Reading a token Next 
token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Stack now 0 6 8 20 1 Reducing stack by rule 5 (line 92): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Stack now 0 6 8 20 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Stack now 0 6 8 20 29 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Stack now 0 6 8 20 29 21 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Stack now 0 6 8 20 29 21 2 1 Reducing stack by rule 5 (line 92): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Stack now 0 6 8 20 29 21 2 10 Reading a token Next token is token '=' (2.12: ) LAC: initial context established for '=' LAC: checking lookahead '=': R11 G30 R9 G29 R7 G8 S18 Reducing stack by rule 11 (line 115): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Stack now 0 6 8 20 29 21 30 Next token is token '=' (2.12: ) Reducing stack by rule 9 (line 105): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Stack now 0 6 8 20 29 Next token is token '=' (2.12: ) Reducing stack by rule 7 (line 103): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Stack now 0 6 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 92): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Stack now 0 6 8 18 2 10 Reading a 
token Next token is token '\n' (2.16-3.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R11 G27 R6 G8 S24 Reducing stack by rule 11 (line 115): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (2.16-3.0: ) Reducing stack by rule 6 (line 93): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Stack now 0 6 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 87): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Stack now 0 6 2 1 Reducing stack by rule 5 (line 92): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Stack now 0 6 2 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Stack now 0 6 2 10 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Stack now 0 6 2 10 23 1 Reducing stack by rule 5 
(line 92): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Stack now 0 6 2 10 23 32 Reading a token Next token is token '=' (4.6: ) LAC: initial context established for '=' LAC: checking lookahead '=': R12 G10 R11 G8 S18 Reducing stack by rule 12 (line 116): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (4.6: ) Reducing stack by rule 11 (line 115): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 92): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (4.10-5.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R11 G27 R6 G8 S24 Reducing stack by rule 11 (line 115): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (4.10-5.0: ) Reducing stack by rule 6 (line 93): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-4.0: ) 
$2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Stack now 0 6 4 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Stack now 0 6 4 2 1 Reducing stack by rule 5 (line 92): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Stack now 0 6 4 2 10 Reading a token Next token is token ')' (5.4: ) LAC: initial context established for ')' LAC: checking lookahead ')': R11 G12 S26 Reducing stack by rule 11 (line 115): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) LAC: initial context discarded due to shift Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 117): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 92): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (5.8: ) LAC: initial context established for '=' LAC: checking lookahead '=': R12 G8 S18 Reducing stack by rule 12 (line 116): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Stack now 0 6 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token 
number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (5.11-6.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 87): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Stack now 0 6 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Stack now 0 6 2 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Stack now 0 6 2 2 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Stack now 0 6 2 2 2 1 Reducing stack by rule 5 (line 92): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Stack now 0 6 2 2 2 10 Reading a token 
Next token is token '=' (7.6: ) LAC: initial context established for '=' LAC: checking lookahead '=': R11 G10 R11 G10 R11 G8 S18 Reducing stack by rule 11 (line 115): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Stack now 0 6 2 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 115): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Stack now 0 6 2 10 Next token is token '=' (7.6: ) Reducing stack by rule 11 (line 115): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Stack now 0 6 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 92): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (7.10-8.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R11 G27 R6 G8 S24 Reducing stack by rule 11 (line 115): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (7.10-8.0: ) Reducing stack by rule 6 (line 93): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Stack now 0 6 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by 
rule 2 (line 83): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 87): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 92): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 92): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '-' (9.7: ) LAC: initial context established for '-' LAC: checking lookahead '-': R8 G8 S19 Reducing stack by rule 8 (line 104): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Stack now 0 6 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) LAC: initial context discarded due to shift Entering state 19 Stack now 0 6 8 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Stack now 0 6 8 19 1 Reducing stack by rule 5 (line 92): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (9.11: ) LAC: initial context established for '=' LAC: checking lookahead '=': R8 G8 S18 Reducing stack by rule 8 (line 104): 
$1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Stack now 0 6 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Stack now 0 6 8 18 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Stack now 0 6 8 18 2 1 Reducing stack by rule 5 (line 92): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Stack now 0 6 8 18 2 10 Reading a token Next token is token '\n' (9.15-10.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R11 G27 R6 G8 S24 Reducing stack by rule 11 (line 115): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Stack now 0 6 8 18 27 Next token is token '\n' (9.15-10.0: ) Reducing stack by rule 6 (line 93): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Stack now 0 6 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 92): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Stack now 0 
6 8 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Stack now 0 6 8 19 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Stack now 0 6 8 19 4 1 Reducing stack by rule 5 (line 92): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Stack now 0 6 8 19 4 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Stack now 0 6 8 19 4 12 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Stack now 0 6 8 19 4 12 19 1 Reducing stack by rule 5 (line 92): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Stack now 0 6 8 19 4 12 19 28 Reading a token Next token is token ')' (10.11: ) LAC: initial context established for ')' LAC: checking lookahead ')': R8 G12 S26 Reducing stack by rule 8 (line 104): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Stack now 0 6 8 19 4 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) LAC: initial context discarded due to shift Entering state 26 Stack now 0 6 8 19 4 12 26 Reducing stack by rule 13 (line 117): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Stack now 0 6 8 19 28 Reading a token Next token is token '=' (10.13: ) LAC: initial context established for '=' LAC: checking lookahead '=': R8 G8 S18 Reducing stack by rule 8 (line 104): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Stack now 0 6 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Stack 
now 0 6 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (10.16-11.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Stack now 0 6 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Stack now 0 6 3 Reducing stack by rule 3 (line 87): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Stack now 0 6 1 Reducing stack by rule 5 (line 92): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 92): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token 
is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Stack now 0 6 8 23 32 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Stack now 0 6 8 23 32 23 1 Reducing stack by rule 5 (line 92): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Stack now 0 6 8 23 32 23 32 Reading a token Next token is token '=' (12.7: ) LAC: initial context established for '=' LAC: checking lookahead '=': R12 G32 R12 G8 S18 Reducing stack by rule 12 (line 116): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Stack now 0 6 8 23 32 Next token is token '=' (12.7: ) Reducing stack by rule 12 (line 116): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Stack now 0 6 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (12.12-13.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Stack now 0 6 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm 
input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Stack now 0 6 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Stack now 0 6 4 1 Reducing stack by rule 5 (line 92): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Stack now 0 6 4 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Stack now 0 6 4 12 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Stack now 0 6 4 12 23 1 Reducing stack by rule 5 (line 92): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Stack now 0 6 4 12 23 32 Reading a token Next token is token ')' (13.5: ) LAC: initial context established for ')' LAC: checking lookahead ')': R12 G12 S26 Reducing stack by rule 12 (line 116): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Stack now 0 6 4 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) LAC: initial context discarded due to shift Entering state 26 Stack now 0 6 4 12 26 Reducing stack by rule 13 (line 117): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Stack now 0 6 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Stack now 0 6 8 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Stack now 0 6 8 23 1 Reducing stack by rule 5 (line 92): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Stack now 0 6 8 23 32 Reading a token Next token is token '=' (13.9: ) LAC: initial context established for '=' LAC: checking lookahead '=': R12 G8 S18 Reducing stack by 
rule 12 (line 116): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Stack now 0 6 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 6 8 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Stack now 0 6 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Stack now 0 6 8 18 27 Reading a token Next token is token '\n' (13.13-14.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Stack now 0 6 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 6 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Stack now 0 6 17 Reducing stack by rule 2 (line 83): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (14.1: ) Shifting token end of file (14.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1455: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1455: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 92): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 
8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) LAC: initial context established for number LAC: checking lookahead number: Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: Err LAC: checking lookahead '=': S18 LAC: checking lookahead '-': S19 LAC: checking lookahead '+': S20 LAC: checking lookahead '*': S21 LAC: checking lookahead '/': S22 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 92): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token number (1.3: 2) LAC: initial context established for number LAC: checking lookahead number: Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: Err LAC: checking lookahead '=': S18 LAC: checking lookahead '-': S19 LAC: checking lookahead '+': S20 LAC: checking lookahead '*': S21 LAC: checking lookahead '/': S22 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token number (1.3: 2) Stack now 0 ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1455: cat stderr input: | 1//2 ./calc.at:1455: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 92): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '/' (1.3: ) LAC: initial context established for '/' LAC: checking lookahead '/': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 92): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Stack now 0 8 22 Reading a token Next token is token '/' (1.3: ) LAC: initial context established for '/' LAC: checking lookahead '/': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '/' (1.3: ) Stack now 0 ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1455: cat stderr input: | error ./calc.at:1455: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) LAC: initial context established for invalid token LAC: checking lookahead invalid token: Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': S3 LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token invalid token (1.1: ) LAC: initial context established for invalid token LAC: checking lookahead invalid token: Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': S3 LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) Stack now 0 ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1455: cat stderr input: | 1 = 2 = 3 ./calc.at:1455: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 92): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '=' (1.7: ) LAC: checking lookahead '=': Err LAC: checking lookahead end of file: R6 G8 Err LAC: checking lookahead number: R6 G8 Err LAC: checking lookahead '=': Err LAC: checking lookahead '-': S19 LAC: checking lookahead '+': S20 LAC: checking lookahead '*': S21 LAC: checking lookahead '/': S22 LAC: checking lookahead NEG: R6 G8 Err LAC: checking lookahead '^': S23 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 18 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 92): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Stack now 0 8 18 Reading a 
token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '=' (1.7: ) LAC: checking lookahead '=': Err LAC: checking lookahead end of file: R6 G8 Err LAC: checking lookahead number: R6 G8 Err LAC: checking lookahead '=': Err LAC: checking lookahead '-': S19 LAC: checking lookahead '+': S20 LAC: checking lookahead '*': S21 LAC: checking lookahead '/': S22 LAC: checking lookahead NEG: R6 G8 Err LAC: checking lookahead '^': S23 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Stack now 0 8 18 Error: popping token '=' (1.3: ) Stack now 0 8 Error: popping nterm exp (1.1: 1) Stack now 0 Cleanup: discarding lookahead token '=' (1.7: ) Stack now 0 ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1455: cat stderr input: | | +1 ./calc.at:1455: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 87): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) LAC: initial context established for '+' LAC: checking lookahead '+': Err LAC: checking lookahead end of file: S16 LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': S3 LAC: checking lookahead '(': S4 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Stack now 0 3 Reducing stack by rule 3 (line 87): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token '+' (2.1: ) LAC: initial context established for '+' LAC: checking lookahead '+': Err LAC: checking lookahead end of file: S16 LAC: checking lookahead number: S1 LAC: checking lookahead 
'=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': S3 LAC: checking lookahead '(': S4 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Stack now 0 Cleanup: discarding lookahead token '+' (2.1: ) Stack now 0 ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1455: cat stderr ./calc.at:1455: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token end of file (1.1: ) LAC: initial context established for end of file LAC: checking lookahead end of file: Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': S3 LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.1: syntax error, unexpected end of file Cleanup: discarding lookahead token end of file (1.1: ) Stack now 0 ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token end of file (1.1: ) LAC: initial context established for end of file LAC: checking lookahead end of file: Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: 
checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': S3 LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.1: syntax error, unexpected end of file Cleanup: discarding lookahead token end of file (1.1: ) Stack now 0 stderr: stdout: ./calc.at:1457: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1457: $PREPARSER ./calc input ./calc.at:1455: cat stderr stderr: ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 stderr: ./calc.at:1455: $PREPARSER ./calc input ./calc.at:1457: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 2 ./calc.at:1457: $PREPARSER ./calc input stderr: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) LAC: initial context established for ')' LAC: checking lookahead ')': Err LAC: checking lookahead end of file: Err 
LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' LAC: initial context discarded due to error recovery Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 92): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.13: ) LAC: initial context established for '+' LAC: checking lookahead '+': R7 G12 S20 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 
Stack now 0 8 20 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) LAC: initial context discarded due to shift Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 92): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.17: ) LAC: initial context established for '+' LAC: checking lookahead '+': R7 G12 S20 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) LAC: initial context discarded due to shift Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token ')' (1.18: ) LAC: initial context established for ')' LAC: checking lookahead ')': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' (1.17: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.20: ) LAC: initial context established for '+' LAC: checking lookahead '+': R7 G8 S20 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) LAC: initial context discarded due to shift Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.23: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
LAC: initial context discarded due to error recovery Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.23: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.25: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.27: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.30: ) LAC: initial context established for '+' LAC: checking lookahead '+': R7 G8 S20 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) LAC: initial context discarded due to shift Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 
20 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 20 4 12 21 1 Reducing stack by rule 5 (line 92): $1 = token number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Stack now 0 8 20 4 12 21 30 Reading a token Next token is token '*' (1.39: ) LAC: initial context established for '*' LAC: checking lookahead '*': R9 G12 S21 Reducing stack by rule 9 (line 105): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) LAC: initial context discarded due to shift Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token '*' (1.41: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.41: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.44: ) LAC: initial context established for '=' LAC: checking lookahead '=': R7 G8 S18 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.47-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) LAC: 
initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1457: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token ')' (1.2: ) LAC: initial context established for ')' LAC: checking lookahead ')': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
LAC: initial context discarded due to error recovery Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 92): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is token '+' (1.13: ) LAC: initial context established for '+' LAC: checking lookahead '+': R7 G12 S20 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) LAC: initial context discarded due to shift Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Stack now 0 8 20 4 12 20 1 Reducing stack by rule 5 (line 92): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Stack now 0 8 20 4 12 20 29 Reading a token Next token is 
token '+' (1.17: ) LAC: initial context established for '+' LAC: checking lookahead '+': R7 G12 S20 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Stack now 0 8 20 4 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) LAC: initial context discarded due to shift Entering state 20 Stack now 0 8 20 4 12 20 Reading a token Next token is token ')' (1.18: ) LAC: initial context established for ')' LAC: checking lookahead ')': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' (1.17: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.7-15: 3) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.7-18: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.20: ) LAC: initial context established for '+' LAC: checking lookahead '+': R7 G8 S20 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) LAC: initial context discarded due to shift Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.23: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
LAC: initial context discarded due to error recovery Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.23: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.23: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.23: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.25: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.25: ) Error: popping token error (1.23: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.23-25: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token '*' (1.27: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.27: ) Error: popping token error (1.23-25: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.23-27: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.30: ) LAC: initial context established for '+' LAC: checking lookahead '+': R7 G8 S20 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Stack now 0 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) LAC: initial context discarded due to shift Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Stack now 0 8 
20 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Stack now 0 8 20 4 12 21 1 Reducing stack by rule 5 (line 92): $1 = token number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Stack now 0 8 20 4 12 21 30 Reading a token Next token is token '*' (1.39: ) LAC: initial context established for '*' LAC: checking lookahead '*': R9 G12 S21 Reducing stack by rule 9 (line 105): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Stack now 0 8 20 4 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) LAC: initial context discarded due to shift Entering state 21 Stack now 0 8 20 4 12 21 Reading a token Next token is token '*' (1.41: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Stack now 0 8 20 4 12 Error: popping nterm exp (1.33-37: 2) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.41: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.41: ) Error: popping token error (1.33-41: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.33-41: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.44: ) LAC: initial context established for '=' LAC: checking lookahead '=': R7 G8 S18 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Stack now 0 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.47-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Stack now 0 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) LAC: 
initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | 1//2 ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1457: $PREPARSER ./calc input stderr: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1455: cat stderr stderr: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) input: | (!!) + (1 2) = 1 ./calc.at:1455: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Stack now 0 4 5 15 Reducing stack by rule 16 (line 120): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.11: 2) LAC: initial context established for number LAC: checking lookahead number: Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: Err LAC: checking lookahead '=': S18 LAC: checking lookahead '-': S19 LAC: checking lookahead '+': S20 LAC: checking lookahead '*': S21 LAC: checking lookahead '/': S22 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.11: 2) LAC: initial context established for number LAC: checking lookahead number: Err Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token 
')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.14: ) LAC: initial context established for '=' LAC: checking lookahead '=': R7 G8 S18 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1457: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) 
Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Stack now 0 4 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Stack now 0 4 5 15 Reducing stack by rule 16 (line 120): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Stack now 0 4 Shifting token error (1.2-3: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.11: 2) LAC: initial context established for number LAC: checking lookahead number: Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: Err LAC: checking lookahead '=': S18 LAC: checking lookahead '-': S19 LAC: checking lookahead '+': S20 LAC: checking lookahead '*': S21 LAC: checking lookahead '/': S22 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.11: 2) LAC: initial context established for number LAC: checking lookahead number: Err Error: discarding token number (1.11: 2) Error: popping token error (1.9-11: ) Stack now 0 
8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.9-11: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.14: ) LAC: initial context established for '=' LAC: checking lookahead '=': R7 G8 S18 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file 
(2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | error ./calc.at:1457: $PREPARSER ./calc input stderr: ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1455: cat stderr syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) input: | (- *) + (1 2) = 1 ./calc.at:1455: $PREPARSER ./calc input ./calc.at:1457: cat stderr input: | 1 = 2 = 3 stderr: ./calc.at:1457: $PREPARSER ./calc input Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
LAC: initial context discarded due to error recovery Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 119): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.12: 2) LAC: initial context established for number LAC: checking lookahead number: Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: Err LAC: checking lookahead '=': S18 LAC: checking lookahead '-': S19 LAC: checking lookahead '+': S20 LAC: checking lookahead '*': S21 LAC: checking lookahead '/': S22 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.12: 2) LAC: 
initial context established for number LAC: checking lookahead number: Err Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.15: ) LAC: initial context established for '=' LAC: checking lookahead '=': R7 G8 S18 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.18-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = 
nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) stderr: ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Stack now 0 4 2 Reading a token Next token is token '*' (1.4: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
LAC: initial context discarded due to error recovery Shifting token error (1.4: ) Entering state 9 Stack now 0 4 2 9 Reducing stack by rule 15 (line 119): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Stack now 0 4 Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.4: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.4: ) Error: popping token error (1.2-4: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Stack now 0 8 20 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Stack now 0 8 20 4 12 Reading a token Next token is token number (1.12: 2) LAC: initial context established for number LAC: checking lookahead number: Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: Err LAC: checking lookahead '=': S18 LAC: checking lookahead '-': S19 LAC: checking lookahead '+': S20 LAC: checking lookahead '*': S21 LAC: checking lookahead '/': S22 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token number (1.12: 2) LAC: 
initial context established for number LAC: checking lookahead number: Err Error: discarding token number (1.12: 2) Error: popping token error (1.10-12: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.10-12: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.15: ) LAC: initial context established for '=' LAC: checking lookahead '=': R7 G8 S18 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.18-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = 
nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1455: cat stderr ./calc.at:1457: cat stderr input: | (* *) + (*) + (*) ./calc.at:1455: $PREPARSER ./calc input input: | | +1 ./calc.at:1457: $PREPARSER ./calc input stderr: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
LAC: initial context discarded due to error recovery Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.10: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
LAC: initial context discarded due to error recovery Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.10: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.13: ) LAC: initial context established for '+' LAC: checking lookahead '+': R7 G8 S20 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) LAC: initial context discarded due to shift Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.16: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
LAC: initial context discarded due to error recovery Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.16: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '\n' (1.18-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R7 G8 S24 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token 
'(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token '*' (1.2: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' LAC: initial context discarded due to error recovery Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Next token is token '*' (1.2: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.2: ) Error: popping token error (1.2: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '*' (1.4: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.4: ) Error: popping token error (1.2: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.2-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token 
'*' (1.10: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' LAC: initial context discarded due to error recovery Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.10: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.10: ) Error: popping token error (1.10: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.10: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '+' (1.13: ) LAC: initial context established for '+' LAC: checking lookahead '+': R7 G8 S20 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Stack now 0 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) LAC: initial context discarded due to shift Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Stack now 0 8 20 4 Reading a token Next token is token '*' (1.16: ) LAC: initial context established for '*' 
LAC: checking lookahead '*': Err LAC: checking lookahead end of file: Err LAC: checking lookahead number: S1 LAC: checking lookahead '=': Err LAC: checking lookahead '-': S2 LAC: checking lookahead '+': Err LAC: checking lookahead '*': Err LAC: checking lookahead '/': Err LAC: checking lookahead NEG: Err LAC: checking lookahead '^': Err LAC: checking lookahead '\n': Err LAC: checking lookahead '(': S4 LAC: checking lookahead ')': Err LAC: checking lookahead '!': S5 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' LAC: initial context discarded due to error recovery Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token '*' (1.16: ) LAC: initial context established for '*' LAC: checking lookahead '*': Err Error: discarding token '*' (1.16: ) Error: popping token error (1.16: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.16: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '\n' (1.18-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R7 G8 S24 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm 
input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1457: cat stderr ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1457: $PREPARSER ./calc /dev/null ./calc.at:1455: cat stderr stderr: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 + 2 * 3 + !+ ++ stderr: ./calc.at:1455: $PREPARSER ./calc input syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 92): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 92): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 92): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 
30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) LAC: initial context established for '+' LAC: checking lookahead '+': R9 G29 R7 G8 S20 Reducing stack by rule 9 (line 105): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) LAC: initial context discarded due to shift Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 20 5 14 Reducing stack by rule 17 (line 121): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 92): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 92): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token 
number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 92): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) LAC: initial context established for '+' LAC: checking lookahead '+': R9 G29 R7 G8 S20 Reducing stack by rule 9 (line 105): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) LAC: initial context discarded due to shift Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Stack now 0 8 20 5 14 Reducing stack by rule 17 (line 121): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1455: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1457: cat stderr input: input: | 1 + 2 * 3 + !- ++ ./calc.at:1455: $PREPARSER ./calc input | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1457: $PREPARSER ./calc input stderr: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) error: 4444 != 1 ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 92): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 92): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 92): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) LAC: initial context established for '+' LAC: checking lookahead '+': R9 G29 R7 G8 S20 Reducing stack by rule 9 (line 105): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = 
nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) LAC: initial context discarded due to shift Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 20 5 13 Reducing stack by rule 18 (line 122): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) error: 4444 != 1 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Stack now 0 1 Reducing stack by rule 5 (line 92): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Stack now 0 8 20 1 Reducing stack by rule 5 (line 92): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Stack now 0 8 20 29 21 Reading a token Next token 
is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Stack now 0 8 20 29 21 1 Reducing stack by rule 5 (line 92): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Stack now 0 8 20 29 21 30 Reading a token Next token is token '+' (1.11: ) LAC: initial context established for '+' LAC: checking lookahead '+': R9 G29 R7 G8 S20 Reducing stack by rule 9 (line 105): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Stack now 0 8 20 29 Next token is token '+' (1.11: ) Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Stack now 0 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) LAC: initial context discarded due to shift Entering state 20 Stack now 0 8 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Stack now 0 8 20 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Stack now 0 8 20 5 13 Reducing stack by rule 18 (line 122): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Stack now 0 8 20 Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1457: cat stderr ./calc.at:1455: cat stderr input: input: | (!!) 
+ (1 2) = 1 ./calc.at:1457: $PREPARSER ./calc input | (#) + (#) = 2222 ./calc.at:1455: $PREPARSER ./calc input stderr: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) error: 2222 != 1 ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) error: 2222 != 1 Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) LAC: initial context established for invalid token LAC: checking lookahead invalid token: Err Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.1-2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 20 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token invalid token (1.8: ) LAC: initial context established for invalid token LAC: checking lookahead invalid token: Err Error: discarding token invalid token (1.8: ) Error: popping token error (1.1-8: ) Stack now 0 8 20 4 LAC: 
initial context discarded due to error recovery Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.7: ) $2 = token error (1.1-8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.11: ) LAC: initial context established for '=' LAC: checking lookahead '=': R7 G8 S18 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering 
state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) LAC: initial context established for invalid token LAC: checking lookahead invalid token: Err Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.1-2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Stack now 0 8 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Stack now 0 8 20 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Next token is token invalid token (1.8: ) LAC: initial context established for invalid token LAC: checking lookahead invalid token: Err Error: discarding token invalid token (1.8: ) Error: popping token error (1.1-8: ) Stack now 0 8 20 4 LAC: initial context discarded due to error recovery Shifting token error (1.1-8: ) Entering state 11 Stack now 0 8 20 4 11 Reading a token Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 
25 Stack now 0 8 20 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.7: ) $2 = token error (1.1-8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Stack now 0 8 20 29 Reading a token Next token is token '=' (1.11: ) LAC: initial context established for '=' LAC: checking lookahead '=': R7 G8 S18 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Stack now 0 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) LAC: initial context discarded due to shift Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.17-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Stack now 0 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1457: cat stderr ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax 
error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (- *) + (1 2) = 1 ./calc.at:1457: $PREPARSER ./calc input ./calc.at:1455: cat stderr stderr: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) error: 2222 != 1 ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (1 + #) = 1111 ./calc.at:1455: $PREPARSER ./calc input stderr: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) error: 2222 != 1 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) LAC: initial context established for invalid token LAC: checking lookahead invalid token: Err Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is 
token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1457: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 
0 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) LAC: initial context established for invalid token LAC: checking lookahead invalid token: Err Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 
Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | (* *) + (*) + (*) ./calc.at:1457: $PREPARSER ./calc input ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1455: cat stderr stderr: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) input: | (# + 1) = 1111 ./calc.at:1455: $PREPARSER ./calc input ./calc.at:1457: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) LAC: initial context established for invalid token LAC: checking lookahead invalid token: Err Error: discarding token invalid token (1.2: ) 
Error: popping token error (1.1-2: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) LAC: initial context established for '+' LAC: checking lookahead '+': Err Error: discarding token '+' (1.4: ) Error: popping token error (1.1-2: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.1-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) LAC: initial context established for number LAC: checking lookahead number: Err Error: discarding token number (1.6: 1) Error: popping token error (1.1-4: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.1-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.1-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by 
rule 4 (line 88): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1457: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.2: ) LAC: initial context established for invalid token LAC: checking lookahead invalid token: Err Error: discarding token invalid token (1.2: ) Error: popping token error (1.1-2: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.1-2: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.4: ) LAC: initial context established for '+' LAC: checking lookahead '+': Err Error: discarding token '+' (1.4: ) Error: popping token error (1.1-2: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.1-4: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.6: 1) LAC: initial context established for number LAC: checking lookahead number: Err Error: discarding token number (1.6: 1) Error: popping token error (1.1-4: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.1-6: ) Entering state 11 Stack now 0 4 11 Reading a token 
Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.1-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.15-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1457: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1455: cat stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1457: $PREPARSER ./calc input stderr: ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (1 + # + 1) = 1111 ./calc.at:1455: $PREPARSER ./calc input stderr: stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) LAC: initial context established for invalid token LAC: checking lookahead invalid token: Err Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) LAC: initial context established for '+' LAC: checking lookahead '+': Err Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) LAC: initial context established for number LAC: checking lookahead number: Err Error: discarding token 
number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.19-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 
Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Stack now 0 4 12 Error: popping nterm exp (1.2: 1) Stack now 0 4 Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Next token is token invalid token (1.6: ) LAC: initial context established for invalid token LAC: checking lookahead invalid token: Err Error: discarding token invalid token (1.6: ) Error: popping token error (1.2-6: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.2-6: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token '+' (1.8: ) LAC: initial context established for '+' LAC: checking lookahead '+': Err Error: discarding token '+' (1.8: ) Error: popping token error (1.2-6: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.2-8: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token number (1.10: 1) LAC: initial context established for number LAC: checking lookahead number: Err Error: discarding token number (1.10: 1) Error: popping token error (1.2-8: ) Stack now 0 4 LAC: initial context discarded due to error recovery Shifting token error (1.2-10: ) Entering state 11 Stack now 0 4 11 Reading a token Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Stack now 0 4 11 25 Reducing stack by rule 14 (line 118): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering 
state 8 Stack now 0 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Stack now 0 8 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Stack now 0 8 18 1 Reducing stack by rule 5 (line 92): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Stack now 0 8 18 27 Reading a token Next token is token '\n' (1.19-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R6 G8 S24 Reducing stack by rule 6 (line 93): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Stack now 0 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1457: cat stderr input: | (#) + (#) = 2222 ./calc.at:1457: $PREPARSER ./calc input ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1455: cat stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' input: | (1 + 1) / (1 - 1) ./calc.at:1455: $PREPARSER ./calc input ./calc.at:1457: cat stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 20 1 Reducing stack by rule 5 (line 92): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Stack now 0 4 12 20 29 Reading a token Next token is token ')' (1.7: ) LAC: initial context established for ')' LAC: checking lookahead ')': R7 G12 S26 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) LAC: initial context discarded due to shift Entering state 26 Stack now 0 4 12 26 Reducing stack by rule 13 (line 117): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Stack now 0 8 22 Reading a 
token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 22 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 22 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 22 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Stack now 0 8 22 4 12 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 22 4 12 19 1 Reducing stack by rule 5 (line 92): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 22 4 12 19 28 Reading a token Next token is token ')' (1.17: ) LAC: initial context established for ')' LAC: checking lookahead ')': R8 G12 S26 Reducing stack by rule 8 (line 104): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 22 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) LAC: initial context discarded due to shift Entering state 26 Stack now 0 8 22 4 12 26 Reducing stack by rule 13 (line 117): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Stack now 0 8 22 31 Reading a token Next token is token '\n' (1.18-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R10 G8 S24 Reducing stack by rule 10 (line 106): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line 
(1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1455: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (1 + #) = 1111 stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Stack now 0 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Stack now 0 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Stack now 0 4 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Stack now 0 4 12 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Stack now 0 4 12 20 1 Reducing stack by rule 5 (line 92): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Stack now 0 4 12 20 29 Reading a token Next token is token ')' (1.7: ) LAC: initial context established for ')' LAC: checking lookahead ')': R7 G12 S26 Reducing stack by rule 7 (line 103): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Stack now 0 4 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) LAC: initial context discarded due to shift Entering state 26 Stack now 0 4 12 26 Reducing stack by rule 13 (line 117): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Stack now 0 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Stack now 0 8 22 Reading a 
token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Stack now 0 8 22 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Stack now 0 8 22 4 1 Reducing stack by rule 5 (line 92): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Stack now 0 8 22 4 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Stack now 0 8 22 4 12 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Stack now 0 8 22 4 12 19 1 Reducing stack by rule 5 (line 92): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Stack now 0 8 22 4 12 19 28 Reading a token Next token is token ')' (1.17: ) LAC: initial context established for ')' LAC: checking lookahead ')': R8 G12 S26 Reducing stack by rule 8 (line 104): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Stack now 0 8 22 4 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) LAC: initial context discarded due to shift Entering state 26 Stack now 0 8 22 4 12 26 Reducing stack by rule 13 (line 117): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Stack now 0 8 22 31 Reading a token Next token is token '\n' (1.18-2.0: ) LAC: initial context established for '\n' LAC: checking lookahead '\n': R10 G8 S24 Reducing stack by rule 10 (line 106): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Stack now 0 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) LAC: initial context discarded due to shift Entering state 24 Stack now 0 8 24 Reducing stack by rule 4 (line 88): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line 
(1.1-2.0: ) Entering state 7 Stack now 0 7 Reducing stack by rule 1 (line 82): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Stack now 0 6 Reading a token Next token is token end of file (2.1: ) Shifting token end of file (2.1: ) Entering state 16 Stack now 0 6 16 Stack now 0 6 16 Cleanup: popping token end of file (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1457: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1455: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error: invalid character: '#' ./calc.at:1455: cat stderr 540. calc.at:1455: ok ./calc.at:1457: cat stderr stderr: stdout: input: ./calc.at:1458: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc | (# + 1) = 1111 ./calc.at:1457: $PREPARSER ./calc input input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 stderr: syntax error: invalid character: '#' ./calc.at:1458: $PREPARSER ./calc input ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: syntax error: invalid character: '#' ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1458: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1457: cat stderr input: | 1 2 ./calc.at:1458: $PREPARSER ./calc input input: | (1 + # + 1) = 1111 ./calc.at:1457: $PREPARSER ./calc input stderr: 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) stderr: syntax error: invalid character: '#' ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) syntax error: invalid character: '#' ./calc.at:1458: cat stderr input: ./calc.at:1457: cat stderr | 1//2 ./calc.at:1458: $PREPARSER ./calc input stderr: input: | (1 + 1) / (1 - 1) 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1457: $PREPARSER ./calc input stderr: ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr error: null divisor ./calc.at:1457: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) stderr: error: null divisor ./calc.at:1458: cat stderr 543. calc.at:1459: testing Calculator C++ parse.error=custom %locations api.prefix={calc} %parse-param {semantic_value *result}{int *count}{int *nerrs} parse.lac=full ... 
./calc.at:1459: mv calc.y.tmp calc.y input: ./calc.at:1459: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y | error ./calc.at:1458: $PREPARSER ./calc input ./calc.at:1457: cat stderr stderr: 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 541. calc.at:1457: ok stderr: 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1458: cat stderr input: | 1 = 2 = 3 ./calc.at:1458: $PREPARSER ./calc input stderr: 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^']) ./calc.at:1458: cat stderr input: | | +1 ./calc.at:1458: $PREPARSER ./calc input stderr: 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1458: cat stderr ./calc.at:1458: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1458: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1458: $PREPARSER ./calc input stderr: 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] 
(expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 544. calc.at:1468: testing Calculator glr.cc ... ./calc.at:1468: mv calc.y.tmp calc.y stderr: 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 ./calc.at:1468: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1459: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS ./calc.at:1458: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1458: $PREPARSER ./calc input stderr: 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 ./calc.at:1458: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1458: $PREPARSER ./calc input stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 ./calc.at:1458: cat stderr input: | (* *) + (*) + (*) ./calc.at:1458: $PREPARSER ./calc input stderr: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: 
[number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1458: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1458: $PREPARSER ./calc input stderr: ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1458: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1458: $PREPARSER ./calc input stderr: ./calc.at:1468: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1458: cat stderr input: | (#) + (#) = 2222 ./calc.at:1458: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1458: cat stderr input: | (1 + #) = 1111 ./calc.at:1458: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1458: cat stderr input: | (# + 1) = 1111 ./calc.at:1458: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1458: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1458: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1458: sed >&2 -e 
'/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1458: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1458: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1458: cat stderr 542. calc.at:1458: ok 545. calc.at:1469: testing Calculator glr2.cc ... ./calc.at:1469: mv calc.y.tmp calc.y ./calc.at:1469: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1469: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: stdout: ./calc.at:1468: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1468: $PREPARSER ./calc input stderr: ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 2 ./calc.at:1468: $PREPARSER ./calc input stderr: syntax error ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1468: cat stderr input: | 1//2 ./calc.at:1468: $PREPARSER ./calc input stderr: syntax error ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1468: cat stderr stderr: stdout: input: | error ./calc.at:1468: $PREPARSER ./calc input ./calc.at:1459: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc stderr: syntax error ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: syntax error | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1459: $PREPARSER ./calc input stderr: ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1459: $EGREP -c -v 'Return for a new token:|LAC:' stderr ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 2 ./calc.at:1459: $PREPARSER ./calc input stderr: 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1468: cat stderr stderr: 1.3: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1459: cat stderr input: input: | 1//2 | 1 = 2 = 3 ./calc.at:1468: $PREPARSER ./calc input ./calc.at:1459: $PREPARSER ./calc input stderr: stderr: 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) syntax error ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error stderr: 1.3: syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1459: cat stderr input: | error ./calc.at:1459: $PREPARSER ./calc input ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1468: cat stderr stderr: 1.1: syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!']) input: | | +1 ./calc.at:1468: $PREPARSER ./calc input ./calc.at:1459: cat stderr stderr: syntax error ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 = 2 = 3 ./calc.at:1459: $PREPARSER ./calc input stderr: syntax error stderr: 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^'] ['\n']) ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1459: cat stderr input: ./calc.at:1468: cat stderr | | +1 ./calc.at:1459: $PREPARSER ./calc input ./calc.at:1468: $PREPARSER ./calc /dev/null stderr: syntax error ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error stderr: 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1459: cat stderr ./calc.at:1459: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) ./calc.at:1468: cat stderr ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: 1.1: syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!']) | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1468: $PREPARSER ./calc input stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1459: cat stderr stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1459: $PREPARSER ./calc input stderr: 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.2: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.18: syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) 1.23: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.41: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.1-46: error: 4444 != 1 ./calc.at:1468: cat stderr ./calc.at:1459: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1468: $PREPARSER ./calc input stderr: input: syntax error error: 2222 != 1 | (!!) + (1 2) = 1 ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1459: $PREPARSER ./calc input stderr: syntax error error: 2222 != 1 stderr: 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-16: error: 2222 != 1 ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1459: cat stderr input: ./calc.at:1468: cat stderr | (- *) + (1 2) = 1 ./calc.at:1459: $PREPARSER ./calc input input: | (- *) + (1 2) = 1 ./calc.at:1468: $PREPARSER ./calc input stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 1.4: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.12: syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) 1.1-17: error: 2222 != 1 syntax error syntax error error: 2222 != 1 ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1459: cat stderr stderr: input: syntax error syntax error error: 2222 != 1 | (* *) + (*) + (*) ./calc.at:1459: $PREPARSER ./calc input stderr: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.10: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 1.16: syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1459: cat stderr ./calc.at:1468: cat stderr input: | 1 + 2 * 3 + !+ ++ input: ./calc.at:1459: $PREPARSER ./calc input | (* *) + (*) + (*) ./calc.at:1468: $PREPARSER ./calc input stderr: stderr: syntax error syntax error syntax error ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error stderr: ./calc.at:1459: $EGREP -c -v 'Return for a new token:|LAC:' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1459: $PREPARSER ./calc input ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1468: cat stderr ./calc.at:1459: cat stderr input: input: | 1 + 2 * 3 + !+ ++ ./calc.at:1468: $PREPARSER ./calc input | (#) + (#) = 2222 ./calc.at:1459: $PREPARSER ./calc input stderr: ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1468: $PREPARSER ./calc input stderr: ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' stderr: ./calc.at:1459: cat stderr input: | (1 + #) = 1111 ./calc.at:1459: $PREPARSER ./calc input ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error 
on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1468: cat stderr input: | (#) + (#) = 2222 ./calc.at:1468: $PREPARSER ./calc input ./calc.at:1459: cat stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | (# + 1) = 1111 syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1459: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1459: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1459: $PREPARSER ./calc input ./calc.at:1468: cat stderr stderr: 1.6: syntax error: invalid character: '#' input: ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | (1 + #) = 1111 ./calc.at:1468: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' stderr: syntax error: invalid character: '#' ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1459: cat stderr syntax error: invalid character: '#' input: | (1 + 1) / (1 - 1) ./calc.at:1459: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1459: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1468: cat stderr stderr: 1.11-17: error: null divisor input: | (# + 1) = 1111 ./calc.at:1468: $PREPARSER ./calc input ./calc.at:1459: cat stderr stderr: syntax error: invalid character: '#' ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 543. calc.at:1459: ok stderr: syntax error: invalid character: '#' ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1468: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1468: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 546. calc.at:1476: testing Calculator C++ %glr-parser ... ./calc.at:1476: mv calc.y.tmp calc.y ./calc.at:1476: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1468: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1468: $PREPARSER ./calc input stderr: error: null divisor ./calc.at:1468: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: error: null divisor ./calc.at:1468: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1468: cat stderr 544. calc.at:1468: ok ./calc.at:1476: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS 547. calc.at:1476: testing Calculator glr2.cc ... 
./calc.at:1476: mv calc.y.tmp calc.y ./calc.at:1476: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1476: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from calc.cc:94: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2167:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at calc.cc:2006:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2429:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | 
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2167:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:1992:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at calc.cc:2942:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2167:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:1992:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2925:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at calc.cc:2913:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./calc.at:1469: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1469: $PREPARSER ./calc input stderr: ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 2 ./calc.at:1469: $PREPARSER ./calc input stderr: syntax error ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1469: cat stderr input: | 1//2 ./calc.at:1469: $PREPARSER ./calc input stderr: syntax error ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1469: cat stderr input: | error ./calc.at:1469: $PREPARSER ./calc input stderr: syntax error ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1469: cat stderr input: | 1 = 2 = 3 ./calc.at:1469: $PREPARSER ./calc input stderr: syntax error ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1469: cat stderr input: | | +1 ./calc.at:1469: $PREPARSER ./calc input stderr: syntax error ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1469: cat stderr ./calc.at:1469: $PREPARSER ./calc /dev/null stderr: syntax error ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1469: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1469: $PREPARSER ./calc input stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1469: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1469: $PREPARSER ./calc input stderr: syntax error error: 2222 != 1 ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error error: 2222 != 1 ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1469: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1469: $PREPARSER ./calc input stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1469: cat stderr input: | (* *) + (*) + (*) ./calc.at:1469: $PREPARSER ./calc input stderr: syntax error syntax error syntax error ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1469: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1469: $PREPARSER ./calc input stderr: ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1469: $PREPARSER ./calc input stderr: ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1469: cat stderr input: | (#) + (#) = 2222 ./calc.at:1469: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1469: cat stderr input: | (1 + #) = 1111 ./calc.at:1469: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: stdout: ./calc.at:1476: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc ./calc.at:1469: cat stderr input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1476: $PREPARSER ./calc input input: | (# + 1) = 1111 ./calc.at:1469: $PREPARSER ./calc input stderr: ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: syntax error: invalid character: '#' ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: syntax error: invalid character: '#' | 1 2 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error ./calc.at:1469: cat stderr ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (1 + # + 1) = 1111 ./calc.at:1469: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1476: cat stderr input: stderr: | 1//2 syntax error: invalid character: '#' ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error ./calc.at:1469: cat stderr ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (1 + 1) / (1 - 1) ./calc.at:1469: $PREPARSER ./calc input stderr: error: null divisor ./calc.at:1469: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1476: cat stderr error: null divisor input: | error ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1469: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error ./calc.at:1469: cat stderr 545. calc.at:1469: ok ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | 1 = 2 = 3 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | | +1 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 548. calc.at:1477: testing Calculator C++ %glr-parser %locations ... 
./calc.at:1477: mv calc.y.tmp calc.y syntax error ./calc.at:1477: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr ./calc.at:1476: $PREPARSER ./calc /dev/null stderr: syntax error ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (!!) 
+ (1 2) = 1 ./calc.at:1477: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error error: 2222 != 1 ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error error: 2222 != 1 ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (* *) + (*) + (*) ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error syntax error syntax error ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1476: $PREPARSER ./calc input stderr: ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1476: $PREPARSER ./calc input stderr: ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (#) + (#) = 2222 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (1 + #) = 1111 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (# + 1) = 1111 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1476: $PREPARSER ./calc input stderr: error: null divisor ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: error: null divisor ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr 546. calc.at:1476: ok 549. calc.at:1477: testing Calculator glr2.cc %locations ... 
./calc.at:1477: mv calc.y.tmp calc.y ./calc.at:1477: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1477: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from calc.cc:94: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2167:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at calc.cc:2006:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2429:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | 
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2167:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:1992:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at calc.cc:2942:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2167:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:1992:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2925:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at calc.cc:2913:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./calc.at:1476: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1476: $PREPARSER ./calc input stderr: ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 2 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | 1//2 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | error ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | 1 = 2 = 3 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | | +1 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr ./calc.at:1476: $PREPARSER ./calc /dev/null stderr: syntax error ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error syntax error error: 4444 != 1 ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error error: 2222 != 1 ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error error: 2222 != 1 ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error error: 2222 != 1 ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (* *) + (*) + (*) ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error syntax error syntax error ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error syntax error syntax error ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1476: $PREPARSER ./calc input stderr: ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1476: $PREPARSER ./calc input stderr: ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (#) + (#) = 2222 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (1 + #) = 1111 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (# + 1) = 1111 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1476: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1476: $PREPARSER ./calc input stderr: error: null divisor ./calc.at:1476: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: error: null divisor ./calc.at:1476: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1476: cat stderr 547. calc.at:1476: ok stderr: stdout: ./calc.at:1477: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1477: $PREPARSER ./calc input stderr: ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 2 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.3: syntax error ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | 1//2 ./calc.at:1477: $PREPARSER ./calc input 550. calc.at:1478: testing Calculator C++ %glr-parser %locations api.location.type={Span} ... 
./calc.at:1478: mv calc.y.tmp calc.y stderr: ./calc.at:1478: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y 1.3: syntax error ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | error ./calc.at:1477: $PREPARSER ./calc input stderr: 1.1: syntax error ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | 1 = 2 = 3 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.7: syntax error ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | | +1 ./calc.at:1477: $PREPARSER ./calc input stderr: 2.1: syntax error ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr ./calc.at:1477: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS ./calc.at:1477: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (* *) + (*) + (*) ./calc.at:1477: $PREPARSER ./calc input stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1477: $PREPARSER ./calc input stderr: ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1477: $PREPARSER ./calc input stderr: ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (#) + (#) = 2222 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (1 + #) = 1111 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (# + 1) = 1111 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1477: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr 548. calc.at:1477: ok 551. calc.at:1478: testing Calculator glr2.cc %locations api.location.type={Span} ... 
./calc.at:1478: mv calc.y.tmp calc.y ./calc.at:1478: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1478: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: stdout: ./calc.at:1478: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1478: $PREPARSER ./calc input stderr: ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 2 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.3: syntax error ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | 1//2 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.3: syntax error ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | error ./calc.at:1478: $PREPARSER ./calc input stderr: 1.1: syntax error ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | 1 = 2 = 3 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.7: syntax error ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | | +1 ./calc.at:1478: $PREPARSER ./calc input stderr: 2.1: syntax error ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr ./calc.at:1478: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (* *) + (*) + (*) ./calc.at:1478: $PREPARSER ./calc input stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1478: $PREPARSER ./calc input stderr: ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1478: $PREPARSER ./calc input stderr: ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (#) + (#) = 2222 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (1 + #) = 1111 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (# + 1) = 1111 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1478: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr 550. calc.at:1478: ok 552. calc.at:1479: testing Calculator C++ %glr-parser %header parse.error=verbose %name-prefix "calc" %verbose ... ./calc.at:1479: mv calc.y.tmp calc.y ./calc.at:1479: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y stderr: In file included from /usr/include/c++/12/vector:70, from calc.cc:94: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2465:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at calc.cc:2301:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2731:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2465:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:2287:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&, const yy::parser::location_type&)' at calc.cc:3265:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2465:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:2287:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:3248:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at calc.cc:3236:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./calc.at:1477: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1477: $PREPARSER ./calc input stderr: ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 2 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.3: syntax error ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS ./calc.at:1477: cat stderr input: | 1//2 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.3: syntax error ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.3: syntax error ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | error ./calc.at:1477: $PREPARSER ./calc input stderr: 1.1: syntax error ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | 1 = 2 = 3 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.7: syntax error ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | | +1 ./calc.at:1477: $PREPARSER ./calc input stderr: 2.1: syntax error ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr ./calc.at:1477: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (* *) + (*) + (*) ./calc.at:1477: $PREPARSER ./calc input stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1477: $PREPARSER ./calc input stderr: ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1477: $PREPARSER ./calc input stderr: ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (#) + (#) = 2222 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (1 + #) = 1111 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (# + 1) = 1111 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1477: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1477: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1477: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1477: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1477: cat stderr 549. calc.at:1477: ok 553. calc.at:1479: testing Calculator glr2.cc %header parse.error=verbose %name-prefix "calc" %verbose ... ./calc.at:1479: mv calc.y.tmp calc.y ./calc.at:1479: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1479: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from calc.cc:122: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2252:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at calc.cc:2088:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2528:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2252:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:2074:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&, const yy::parser::location_type&)' at calc.cc:3062:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2252:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:2074:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:3045:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at calc.cc:3033:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./calc.at:1478: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/
        )' calc.cc
input:
  | 1 + 2 * 3 = 7
  | 1 + 2 * -3 = -5
  |
  | -1^2 = -1
  | (-1)^2 = 1
  |
  | ---1 = -1
  |
  | 1 - 2 - 3 = -4
  | 1 - (2 - 3) = 2
  |
  | 2^2^3 = 256
  | (2^2)^3 = 64
./calc.at:1478: $PREPARSER ./calc input
stderr:
./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
input:
  | 1 2
./calc.at:1478: $PREPARSER ./calc input
stderr:
1.3: syntax error
./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.3: syntax error
./calc.at:1478: "$PERL" -pi -e 'use strict;
s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
{ my $unexp = $1;
  my @exps = $2 =~ /\[(.*?)\]/g;
  ($#exps && $#exps < 4) ?
  "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" :
  "syntax error, unexpected $unexp"; }eg
' expout || exit 77
./calc.at:1478: cat stderr
input:
  | 1//2
./calc.at:1478: $PREPARSER ./calc input
stderr:
1.3: syntax error
./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.3: syntax error
./calc.at:1478: "$PERL" -pi -e 'use strict;
s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
{ my $unexp = $1;
  my @exps = $2 =~ /\[(.*?)\]/g;
  ($#exps && $#exps < 4) ?
  "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" :
  "syntax error, unexpected $unexp"; }eg
' expout || exit 77
./calc.at:1478: cat stderr
input:
  | error
./calc.at:1478: $PREPARSER ./calc input
stderr:
1.1: syntax error
./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
1.1: syntax error
./calc.at:1478: "$PERL" -pi -e 'use strict;
s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
{ my $unexp = $1;
  my @exps = $2 =~ /\[(.*?)\]/g;
  ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | 1 = 2 = 3 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.7: syntax error ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.7: syntax error ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | | +1 ./calc.at:1478: $PREPARSER ./calc input stderr: 2.1: syntax error ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 2.1: syntax error ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr ./calc.at:1478: $PREPARSER ./calc /dev/null stderr: 1.1: syntax error ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.1: syntax error ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.18: syntax error 1.23: syntax error 1.41: syntax error 1.1-46: error: 4444 != 1 ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11: syntax error 1.1-16: error: 2222 != 1 ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.4: syntax error 1.12: syntax error 1.1-17: error: 2222 != 1 ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (* *) + (*) + (*) ./calc.at:1478: $PREPARSER ./calc input stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error 1.10: syntax error 1.16: syntax error ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1478: $PREPARSER ./calc input stderr: ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1478: $PREPARSER ./calc input stderr: ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (#) + (#) = 2222 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' 1.8: syntax error: invalid character: '#' ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (1 + #) = 1111 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (# + 1) = 1111 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.2: syntax error: invalid character: '#' ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1478: $PREPARSER ./calc input stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.6: syntax error: invalid character: '#' ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1478: $PREPARSER ./calc input stderr: 1.11-17: error: null divisor ./calc.at:1478: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: 1.11-17: error: null divisor ./calc.at:1478: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1478: cat stderr 551. calc.at:1478: ok 554. calc.at:1480: testing Calculator C++ %glr-parser parse.error=verbose api.prefix={calc} %verbose ... ./calc.at:1480: mv calc.y.tmp calc.y ./calc.at:1480: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1480: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: stdout: ./calc.at:1479: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc calc.hh input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1479: $PREPARSER ./calc input stderr: ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 2 ./calc.at:1479: $PREPARSER ./calc input stderr: syntax error, unexpected number ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected number ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr input: | 1//2 ./calc.at:1479: $PREPARSER ./calc input stderr: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr input: | error ./calc.at:1479: $PREPARSER ./calc input stderr: syntax error, unexpected invalid token ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected invalid token ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr input: | 1 = 2 = 3 ./calc.at:1479: $PREPARSER ./calc input stderr: syntax error, unexpected '=' ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '=' ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr input: | | +1 ./calc.at:1479: $PREPARSER ./calc input stderr: syntax error, unexpected '+' ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '+' ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr ./calc.at:1479: $PREPARSER ./calc /dev/null stderr: syntax error, unexpected end of input ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected end of input ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1479: $PREPARSER ./calc input stderr: syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' error: 4444 != 1 ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
syntax error, unexpected '*', expecting number or '-' or '(' or '!' error: 4444 != 1 ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1479: $PREPARSER ./calc input stderr: syntax error, unexpected number error: 2222 != 1 ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected number error: 2222 != 1 ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1479: $PREPARSER ./calc input stderr: syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected number error: 2222 != 1 ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected number error: 2222 != 1 ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr input: | (* *) + (*) + (*) ./calc.at:1479: $PREPARSER ./calc input stderr: syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1479: $PREPARSER ./calc input stderr: ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 + 2 * 3 + !- ++ ./calc.at:1479: $PREPARSER ./calc input stderr: ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr input: | (#) + (#) = 2222 ./calc.at:1479: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr input: | (1 + #) = 1111 ./calc.at:1479: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr input: | (# + 1) = 1111 ./calc.at:1479: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1479: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1479: $PREPARSER ./calc input stderr: error: null divisor ./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: error: null divisor ./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1479: cat stderr 552. calc.at:1479: ok 555. calc.at:1480: testing Calculator glr2.cc parse.error=verbose api.prefix={calc} %verbose ... ./calc.at:1480: mv calc.y.tmp calc.y ./calc.at:1480: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1480: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: stdout: ./calc.at:1480: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1480: $PREPARSER ./calc input stderr: ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | 1 2 ./calc.at:1480: $PREPARSER ./calc input stderr: syntax error, unexpected number ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected number ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: cat stderr input: | 1//2 ./calc.at:1480: $PREPARSER ./calc input stderr: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: cat stderr input: | error ./calc.at:1480: $PREPARSER ./calc input stderr: syntax error, unexpected invalid token ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected invalid token ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: cat stderr input: | 1 = 2 = 3 ./calc.at:1480: $PREPARSER ./calc input stderr: syntax error, unexpected '=' ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '=' ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: cat stderr input: | | +1 ./calc.at:1480: $PREPARSER ./calc input stderr: syntax error, unexpected '+' ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '+' ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: cat stderr ./calc.at:1480: $PREPARSER ./calc /dev/null stderr: syntax error, unexpected end of input ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected end of input ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1480: $PREPARSER ./calc input stderr: syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' error: 4444 != 1 ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
error: 4444 != 1
./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1480: cat stderr
input:
| (!!) + (1 2) = 1
./calc.at:1480: $PREPARSER ./calc input
stderr:
syntax error, unexpected number
error: 2222 != 1
./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected number
error: 2222 != 1
./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1480: cat stderr
input:
| (- *) + (1 2) = 1
./calc.at:1480: $PREPARSER ./calc input
stderr:
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected number
error: 2222 != 1
./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected number
error: 2222 != 1
./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1480: cat stderr
input:
| (* *) + (*) + (*)
./calc.at:1480: $PREPARSER ./calc input
stderr:
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1480: cat stderr
input:
| 1 + 2 * 3 + !+ ++
./calc.at:1480: $PREPARSER ./calc input
stderr:
./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
input:
| 1 + 2 * 3 + !- ++
./calc.at:1480: $PREPARSER ./calc input
stderr:
./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1480: cat stderr
input:
| (#) + (#) = 2222
./calc.at:1480: $PREPARSER ./calc input
stderr:
syntax error: invalid character: '#'
syntax error: invalid character: '#'
./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error: invalid character: '#'
syntax error: invalid character: '#'
./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1480: cat stderr
input:
| (1 + #) = 1111
./calc.at:1480: $PREPARSER ./calc input
stderr:
syntax error: invalid character: '#'
./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error: invalid character: '#'
./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1480: cat stderr
input:
| (# + 1) = 1111
./calc.at:1480: $PREPARSER ./calc input
stderr:
syntax error: invalid character: '#'
./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error: invalid character: '#'
./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1480: cat stderr
input:
| (1 + # + 1) = 1111
./calc.at:1480: $PREPARSER ./calc input
stderr:
syntax error: invalid character: '#'
./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error: invalid character: '#'
./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1480: cat stderr
input:
| (1 + 1) / (1 - 1)
./calc.at:1480: $PREPARSER ./calc input
stderr:
error: null divisor
./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
error: null divisor
./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1480: cat stderr
554. calc.at:1480: ok
556. calc.at:1482: testing Calculator C++ %glr-parser %debug ...
./calc.at:1482: mv calc.y.tmp calc.y
./calc.at:1482: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y
./calc.at:1482: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from calc.hh:59,
                 from calc.cc:80:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:1779:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at calc.cc:1618:66,
    inlined from 'void calc::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, calc::parser::glr_state*, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2044:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:1779:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:1604:58,
    inlined from 'void calc::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const calc::parser::value_type&)' at calc.cc:2558:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:1779:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:1604:58,
    inlined from 'void calc::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2541:58,
    inlined from 'YYRESULTTAG calc::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at calc.cc:2529:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./calc.at:1479: "$PERL" -ne '
  chomp;
  print "$ARGV:$.: {$_}\n"
    if (# No starting/ending empty lines.
        (eof || $. == 1) && /^\s*$/
        # No trailing space.
        || /\s$/
        # No tabs.
        || /\t/
       )' calc.cc calc.hh
input:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1479: $PREPARSER ./calc input
stderr:
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
input:
| 1 2
./calc.at:1479: $PREPARSER ./calc input
stderr:
syntax error, unexpected number
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected number
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
input:
| 1//2
./calc.at:1479: $PREPARSER ./calc input
stderr:
syntax error, unexpected '/', expecting number or '-' or '(' or '!'
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected '/', expecting number or '-' or '(' or '!'
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
input:
| error
./calc.at:1479: $PREPARSER ./calc input
stderr:
syntax error, unexpected invalid token
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected invalid token
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
input:
| 1 = 2 = 3
./calc.at:1479: $PREPARSER ./calc input
stderr:
syntax error, unexpected '='
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected '='
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
input:
|
| +1
./calc.at:1479: $PREPARSER ./calc input
stderr:
syntax error, unexpected '+'
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected '+'
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
./calc.at:1479: $PREPARSER ./calc /dev/null
stderr:
syntax error, unexpected end of input
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected end of input
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1479: $PREPARSER ./calc input
stderr:
syntax error, unexpected ')', expecting number or '-' or '(' or '!'
syntax error, unexpected ')', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
error: 4444 != 1
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected ')', expecting number or '-' or '(' or '!'
syntax error, unexpected ')', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
error: 4444 != 1
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
input:
| (!!) + (1 2) = 1
./calc.at:1479: $PREPARSER ./calc input
stderr:
syntax error, unexpected number
error: 2222 != 1
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected number
error: 2222 != 1
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
input:
| (- *) + (1 2) = 1
./calc.at:1479: $PREPARSER ./calc input
stderr:
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected number
error: 2222 != 1
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected number
error: 2222 != 1
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
input:
| (* *) + (*) + (*)
./calc.at:1479: $PREPARSER ./calc input
stderr:
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
input:
| 1 + 2 * 3 + !+ ++
./calc.at:1479: $PREPARSER ./calc input
stderr:
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
input:
| 1 + 2 * 3 + !- ++
./calc.at:1479: $PREPARSER ./calc input
stderr:
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
input:
| (#) + (#) = 2222
./calc.at:1479: $PREPARSER ./calc input
stderr:
syntax error: invalid character: '#'
syntax error: invalid character: '#'
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error: invalid character: '#'
syntax error: invalid character: '#'
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
input:
| (1 + #) = 1111
./calc.at:1479: $PREPARSER ./calc input
stderr:
syntax error: invalid character: '#'
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error: invalid character: '#'
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
input:
| (# + 1) = 1111
./calc.at:1479: $PREPARSER ./calc input
stderr:
syntax error: invalid character: '#'
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error: invalid character: '#'
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
input:
| (1 + # + 1) = 1111
./calc.at:1479: $PREPARSER ./calc input
stderr:
syntax error: invalid character: '#'
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
syntax error: invalid character: '#'
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
input:
| (1 + 1) / (1 - 1)
./calc.at:1479: $PREPARSER ./calc input
stderr:
error: null divisor
./calc.at:1479: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
error: null divisor
./calc.at:1479: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1479: cat stderr
553. calc.at:1479: ok
557. calc.at:1482: testing Calculator glr2.cc %debug ...
./calc.at:1482: mv calc.y.tmp calc.y
./calc.at:1482: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y
./calc.at:1482: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS
stderr:
stdout:
./calc.at:1482: "$PERL" -ne '
  chomp;
  print "$ARGV:$.: {$_}\n"
    if (# No starting/ending empty lines.
        (eof || $. == 1) && /^\s*$/
        # No trailing space.
        || /\s$/
        # No tabs.
        || /\t/
       )' calc.cc
input:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1482: $PREPARSER ./calc input
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from calc.cc:98:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2198:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:2023:58,
    inlined from 'void calc::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const calc::parser::value_type&)' at calc.cc:2977:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2198:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at calc.cc:2037:66,
    inlined from 'void calc::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, calc::parser::glr_state*, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2463:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2198:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:2023:58,
    inlined from 'void calc::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2960:58,
    inlined from 'YYRESULTTAG calc::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at calc.cc:2948:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./calc.at:1480: "$PERL" -ne '
  chomp;
  print "$ARGV:$.: {$_}\n"
    if (# No starting/ending empty lines.
        (eof || $. == 1) && /^\s*$/
        # No trailing space.
        || /\s$/
        # No tabs.
|| /\t/ )' calc.cc stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (7) Shifting token "number" (7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 
79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (5) Shifting token "number" (5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-5) $2 = 
token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp 
(-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () 
Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by 
rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token 
"number" (4) Shifting token "number" (4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering 
state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next 
token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (256) Shifting token "number" (256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) 
Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (64) Shifting token "number" (64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1480: $PREPARSER ./calc input stderr: ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (7) Shifting token "number" (7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7) $2 = token 
'=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (5) Shifting token "number" (5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5) -> $$ = nterm exp (5) 
Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next 
token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is 
token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a 
token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering 
state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (4) Shifting token "number" (4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 
Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next 
token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (256) Shifting token "number" (256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) 
Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (64) Shifting token "number" (64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () input: input: | 1 2 ./calc.at:1482: $PREPARSER ./calc input | 1 2 ./calc.at:1480: $PREPARSER ./calc input stderr: stderr: syntax error, unexpected number ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token "number" (2) syntax error Error: popping nterm exp (1) Cleanup: discarding lookahead token "number" (2) ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected number stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token "number" (2) syntax error Error: popping nterm exp (1) Cleanup: discarding lookahead token "number" (2) ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: cat stderr input: | 1//2 ./calc.at:1480: $PREPARSER ./calc input stderr: ./calc.at:1482: cat stderr syntax error, unexpected '/', expecting number or '-' or '(' or '!' ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: syntax error, unexpected '/', expecting number or '-' or '(' or '!' | 1//2 ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '/' () syntax error Error: popping token '/' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '/' () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '/' () syntax error Error: popping token '/' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '/' () ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: cat stderr ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | error ./calc.at:1480: $PREPARSER ./calc input stderr: syntax error, unexpected invalid token ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected invalid token ./calc.at:1482: cat stderr input: | error ./calc.at:1482: $PREPARSER ./calc input ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" () syntax error Cleanup: discarding lookahead token "invalid token" () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1480: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" () syntax error Cleanup: discarding lookahead token "invalid token" () input: | 1 = 2 = 3 ./calc.at:1480: $PREPARSER ./calc input stderr: syntax error, unexpected '=' ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '=' ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | 1 = 2 = 3 ./calc.at:1482: $PREPARSER ./calc input stderr: ./calc.at:1480: cat stderr Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '=' () syntax error Error: popping nterm exp (2) Error: popping token '=' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '=' () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | | +1 stderr: ./calc.at:1480: $PREPARSER ./calc input Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '=' () syntax error Error: popping nterm exp (2) Error: popping 
token '=' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '=' () stderr: syntax error, unexpected '+' ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 syntax error, unexpected '+' ./calc.at:1482: cat stderr input: ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | | +1 ./calc.at:1482: $PREPARSER ./calc input ./calc.at:1480: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '+' () syntax error Error: popping nterm input () Cleanup: discarding lookahead token '+' () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1480: $PREPARSER ./calc /dev/null stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '+' () syntax error Error: popping nterm input () Cleanup: discarding lookahead token '+' () syntax error, 
unexpected end of input ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected end of input ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: cat stderr ./calc.at:1482: cat stderr ./calc.at:1482: $PREPARSER ./calc /dev/null input: stderr: Starting parse Entering state 0 Reading a token Now at end of input. syntax error Cleanup: discarding lookahead token "end of input" () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1480: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Now at end of input. syntax error Cleanup: discarding lookahead token "end of input" () stderr: syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' error: 4444 != 1 ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected ')', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
error: 4444 != 1 ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp 
(2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token 
'+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token 
'\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (!!) + (1 2) = 1 ./calc.at:1480: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by 
rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" 
(1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): 
$1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () stderr: syntax error, unexpected number error: 2222 != 1 ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected number error: 2222 != 1 ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: ./calc.at:1480: cat stderr | (!!) + (1 2) = 1 ./calc.at:1482: $PREPARSER ./calc input stderr: input: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' () $2 = token '!' 
() Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = 
nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | (- *) + (1 2) = 1 ./calc.at:1480: $PREPARSER ./calc input stderr: stderr: syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected number error: 2222 != 1 Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' () $2 = token '!' () Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): 
$1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected number error: 2222 != 1 ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr ./calc.at:1480: cat stderr input: input: | (- *) + (1 2) = 1 ./calc.at:1482: $PREPARSER ./calc input | (* *) + (*) + (*) ./calc.at:1480: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) 
$2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
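The `"$PERL" -pi -e '…' expout` step that recurs throughout this log rewrites Java-skeleton diagnostics of the form `syntax error on token ['/'] (expected: [number] ['-'] …)` into the yacc.c-style wording (`syntax error, unexpected '/', expecting number or '-' or …`), keeping the "expecting" list only when there are two to four candidates, which is what the `($#exps && $#exps < 4)` test encodes. The following is a hypothetical Python re-implementation of that substitution, not part of the test suite, shown only to document its behavior:

```python
import re

def normalize(line: str) -> str:
    """Sketch of the Perl normalization used in this log (assumption:
    same regex and same 2-4 candidate cutoff as the one-liner)."""
    def repl(m: re.Match) -> str:
        unexp = m.group(1)
        # Collect the bracketed expected tokens, e.g. [number] ['-'] -> ["number", "'-'"]
        exps = re.findall(r"\[(.*?)\]", m.group(2))
        if 2 <= len(exps) <= 4:      # mirrors ($#exps && $#exps < 4)
            return ("syntax error, unexpected %s, expecting %s"
                    % (unexp, " or ".join(exps)))
        return "syntax error, unexpected %s" % unexp
    return re.sub(r"syntax error on token \[(.*?)\] \(expected: (.*)\)",
                  repl, line)

print(normalize("syntax error on token ['/'] "
                "(expected: [number] ['-'] ['('] ['!'])"))
# -> syntax error, unexpected '/', expecting number or '-' or '(' or '!'
```

The `|| exit 77` on the real command marks the test as skipped if the rewrite of `expout` fails, and the companion `sed -e '/^profiling:.*:Merge mismatch for summaries/d'` simply drops gcov profiling noise from the captured stderr before comparison.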
./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) 
Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' syntax error, unexpected '*', expecting number or '-' or '(' or '!' ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: cat stderr ./calc.at:1482: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1480: $PREPARSER ./calc input input: | (* *) + (*) + (*) ./calc.at:1482: $PREPARSER ./calc input stderr: ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' 
() syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 + 2 * 3 + !- ++ ./calc.at:1480: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax 
error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () stderr: ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: cat stderr ./calc.at:1482: cat stderr input: input: | 1 + 2 * 3 + !+ ++ ./calc.at:1482: $PREPARSER ./calc input | (#) + (#) = 2222 ./calc.at:1480: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' syntax error: invalid character: '#' ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next 
token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' () $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) stderr: ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr syntax error: invalid character: '#' syntax error: invalid character: '#' stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token 
'+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' () $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) input: | 1 + 2 * 3 + !- ++ ./calc.at:1482: $PREPARSER ./calc input ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' 
() Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' () $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' 
() $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1480: cat stderr input: | (1 + #) = 1111 ./calc.at:1480: $PREPARSER ./calc input ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error: invalid character: '#' ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: syntax error: invalid character: '#' ./calc.at:1482: cat stderr input: | (#) + (#) = 2222 ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: $PREPARSER ./calc input ./calc.at:1480: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () 
Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2222) Shifting token "number" (2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2222) Shifting token "number" (2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp 
(2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () input: | (# + 1) = 1111 ./calc.at:1480: $PREPARSER ./calc input stderr: syntax error: invalid character: '#' ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error: invalid character: '#' ./calc.at:1482: cat stderr input: ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | (1 + #) = 1111 ./calc.at:1482: $PREPARSER ./calc input stderr: ./calc.at:1480: cat stderr Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (1 + # + 1) = 1111 ./calc.at:1480: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () stderr: syntax error: invalid character: '#' ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: syntax error: invalid character: '#' ./calc.at:1482: cat stderr input: ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | (# + 1) = 1111 ./calc.at:1482: $PREPARSER ./calc input ./calc.at:1480: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" 
(1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | (1 + 1) / (1 - 1) ./calc.at:1480: $PREPARSER ./calc input stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp 
(1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () error: null divisor ./calc.at:1480: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: error: null divisor ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr ./calc.at:1480: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1480: cat stderr 555. 
calc.at:1480: input: | (1 + # + 1) = 1111 ./calc.at:1482: $PREPARSER ./calc input ok stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> 
$$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () 
Entering state 19 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 
Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 558. calc.at:1485: testing Calculator C++ %glr-parser parse.error=detailed %debug %name-prefix "calc" %verbose ... ./calc.at:1485: mv calc.y.tmp calc.y ./calc.at:1482: cat stderr ./calc.at:1485: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y 556. calc.at:1482: ok 559. calc.at:1485: testing Calculator glr2.cc parse.error=detailed %debug %name-prefix "calc" %verbose ... 
./calc.at:1485: mv calc.y.tmp calc.y ./calc.at:1485: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1485: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS ./calc.at:1485: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: stdout: ./calc.at:1485: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (7) Shifting token number 
(7) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () 
Shifting token '-' () Entering state 2 Reading a token Next token is token number (5) Shifting token number (5) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm 
exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a 
token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 
(line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next 
token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (4) Shifting token number (4) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting 
token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering 
state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (256) Shifting token number (256) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering 
state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (64) Shifting token number (64) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (7) Shifting token number (7) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = 
nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (5) Shifting token number (5) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) Entering state 
8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () 
Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp 
(1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = 
nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token 
'-' () Entering state 2 Reading a token Next token is token number (4) Shifting token number (4) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' 
() -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Reading 
a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (256) Shifting token number (256) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) 
Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (64) Shifting token number (64) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input ()
input:
 | 1 2
./calc.at:1485: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Cleanup: discarding lookahead token number (2)
./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Cleanup: discarding lookahead token number (2)
./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1485: cat stderr
input:
 | 1//2
./calc.at:1485: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '/' () syntax error, unexpected '/', expecting number or '-' or '(' or '!'
Error: popping token '/' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '/' ()
./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '/' () syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '/' ()
./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1485: cat stderr
input:
 | error
./calc.at:1485: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token invalid token () syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token ()
./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token invalid token () syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token ()
./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1485: cat stderr
input:
 | 1 = 2 = 3
./calc.at:1485: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '=' () syntax error, unexpected '=' Error: popping nterm exp (2) Error: popping token '=' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '=' ()
./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '=' () syntax error, unexpected '=' Error: popping nterm exp (2) Error: popping token '=' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '=' ()
./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1485: cat stderr
input:
 |
 | +1
./calc.at:1485: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '+' () syntax error, unexpected '+' Error: popping nterm input () Cleanup: discarding lookahead token '+' ()
./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '+' () syntax error, unexpected '+' Error: popping nterm input () Cleanup: discarding lookahead token '+' ()
./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1485: cat stderr
./calc.at:1485: $PREPARSER ./calc /dev/null
stderr:
Starting parse Entering state 0 Reading a token Now at end of input.
syntax error, unexpected end of file Cleanup: discarding lookahead token end of file ()
./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1485: cat stderr
input:
 | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1485: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1
= token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input ()
./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!'
Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 
92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
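(Aside, not part of the original log.) The trace above hinges on two semantic actions: rule 14, `exp: '(' error ')'`, reduces each parenthesized group containing garbage to the sentinel value 1111, and rule 6, `exp: exp '=' exp`, reports a mismatch (`error: 4444 != 1`) without aborting the parse. A minimal Python sketch of those actions (illustrative names; the real actions live in the test's C grammar):

```python
def recover_paren_error():
    # Rule 14, exp: '(' error ')', yields the sentinel 1111 — visible as
    # "-> $$ = nterm exp (1111)" after every recovered group in the trace.
    return 1111

def check_equals(lhs, rhs):
    # Rule 6, exp: exp '=' exp, reports a mismatch but still returns a
    # value so parsing continues (cf. "error: 4444 != 1" in the trace).
    if lhs != rhs:
        print("error: %d != %d" % (lhs, rhs))
    return lhs

# Four recovered groups summed with rule 7 (exp: exp '+' exp) reproduce
# the running values 1111, 2222, 3333, 4444 seen above.
total = sum(recover_paren_error() for _ in range(4))
check_equals(total, 1)
```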
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 120): $1 = token '!' () $2 = token '!' () Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token 
Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 120): $1 = token '!' () $2 = token '!' 
() Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = 
nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 119): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () 
Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 119): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () 
Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | (* *) + (*) + (*) ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
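(Aside, not part of the original log.) The recurring `$PERL` one-liner normalizes Java/D-style diagnostics of the form `syntax error on token [X] (expected: [A] [B])` into the C skeleton's wording before `expout` is compared. A rough Python equivalent, as a hypothetical port for readability — the test suite itself only uses the Perl version:

```python
import re

def normalize(line):
    """Rewrite 'syntax error on token [X] (expected: [A] [B])' into
    Bison's C-style 'syntax error, unexpected X, expecting A or B'."""
    def repl(m):
        unexp = m.group(1)
        exps = re.findall(r'\[(.*?)\]', m.group(2))
        # Mirrors Perl's ($#exps && $#exps < 4): list the expected tokens
        # only when there are between two and four of them.
        if len(exps) - 1 and len(exps) - 1 < 4:
            return "syntax error, unexpected %s, expecting %s" % (
                unexp, " or ".join(exps))
        return "syntax error, unexpected %s" % unexp
    return re.sub(r"syntax error on token \[(.*?)\] \(expected: (.*)\)",
                  repl, line)
```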
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
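(Aside, not part of the original log.) The `Error: discarding token '*' ()` lines in these traces show panic-mode recovery: once the `error` token has been shifted, the parser throws away lookahead tokens until it reads one that may legally follow `error` — here `')'`, which lets rule 14 reduce. A schematic sketch of that discard loop (illustrative only; the real logic is inside the Bison-generated C parser):

```python
def discard_until(tokens, i, can_follow_error=(')',)):
    """Panic-mode recovery: discard lookaheads from tokens[i:] until one
    is in can_follow_error, returning the new index and what was dropped."""
    discarded = []
    while i < len(tokens) and tokens[i] not in can_follow_error:
        # Corresponds to a trace line "Error: discarding token ... ()".
        discarded.append(tokens[i])
        i += 1
    return i, discarded
```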
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 121): $1 = token '!' 
() $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 121): $1 = token '!' 
() $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) input: | 1 + 2 * 3 + !- ++ ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 122): $1 = token '!' 
() $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 122): $1 = token '!' () $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | (#) + (#) = 2222 ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2222) Shifting token number (2222) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = 
nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token 
'=' () Entering state 18 Reading a token Next token is token number (2222) Shifting token number (2222) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | (1 + #) = 1111 ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
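(Aside, not part of the original log.) The `syntax error: invalid character: '#'` lines are emitted by the test scanner, not the parser: the scanner reports the bad character itself and then hands the parser a bare `error` token, which is what triggers the recovery seen above. A toy Python rendering of that behavior (hypothetical names; the suite's actual scanner is C):

```python
import sys

def scan(text):
    """Toy scanner mirroring the traced behavior: digits become `number`
    tokens, known operators are returned as themselves, and any invalid
    character is reported on stderr and yielded as the `error` token."""
    tokens, i = [], 0
    while i < len(text):
        c = text[i]
        if c == ' ':
            i += 1
        elif c.isdigit():
            j = i
            while j < len(text) and text[j].isdigit():
                j += 1
            tokens.append(('number', int(text[i:j])))
            i = j
        elif c in "+-*/()!=\n":
            tokens.append((c, None))
            i += 1
        else:
            print("syntax error: invalid character: '%s'" % c, file=sys.stderr)
            tokens.append(('error', None))
            i += 1
    return tokens
```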
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | (# + 1) = 1111 ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 
Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp 
(1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 
= nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a 
token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 106): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token ')' () 
Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 106): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr 558. calc.at:1485: ok 560. calc.at:1486: testing Calculator C++ %glr-parser parse.error=verbose %debug %name-prefix "calc" %verbose ... ./calc.at:1486: mv calc.y.tmp calc.y ./calc.at:1486: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1486: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from calc.cc:94: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2167:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at calc.cc:2006:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2429:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2167:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:1992:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at calc.cc:2950:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2167:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:1992:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2933:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at calc.cc:2921:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./calc.at:1482: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (7) Shifting token "number" (7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 
69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (5) Shifting token "number" (5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-5) $2 = token '=' () $3 
= nterm exp (-5) -> $$ = nterm exp (-5) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp 
(1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' 
() Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token 
is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next 
token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (4) Shifting token "number" (4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is 
token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token 
"number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (256) Shifting token "number" (256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 
Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (64) Shifting token "number" (64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (7) Shifting token "number" (7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): 
$1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (5) Shifting token "number" (5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-5) $2 = token '=' () $3 = 
nterm exp (-5) -> $$ = nterm exp (-5) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp 
(1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' 
() Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token 
is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next 
token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (4) Shifting token "number" (4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is 
token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token 
"number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (256) Shifting token "number" (256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 
Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (64) Shifting token "number" (64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () input: | 1 2 ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token "number" (2) syntax error Error: popping nterm exp (1) Cleanup: discarding lookahead token "number" (2) ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token "number" (2) syntax error Error: popping nterm exp (1) Cleanup: discarding lookahead token "number" (2) ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | 1//2 ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '/' () syntax error Error: popping token '/' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '/' () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '/' () syntax error Error: popping token '/' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '/' () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | error ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" () syntax error Cleanup: discarding lookahead token "invalid token" () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" () syntax error Cleanup: discarding lookahead token "invalid token" () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | 1 = 2 = 3 ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '=' () syntax error Error: popping nterm exp (2) Error: popping token '=' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '=' () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = 
nterm exp (1) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '=' () syntax error Error: popping nterm exp (2) Error: popping token '=' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '=' () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | | +1 ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '+' () syntax error Error: popping nterm input () Cleanup: discarding lookahead token '+' () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '+' () syntax error Error: popping nterm input () Cleanup: discarding lookahead token '+' () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ 
/\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr ./calc.at:1482: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Reading a token Now at end of input. syntax error Cleanup: discarding lookahead token "end of input" () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Now at end of input. syntax error Cleanup: discarding lookahead token "end of input" () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering 
state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () 
$3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" 
(1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) 
Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' 
() Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () 
Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' () $2 = token '!' 
() Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = 
nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' () $2 = token '!' () Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token 
Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is 
token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () 
Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | (* *) + (*) + (*) ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is 
token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 
(line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
() $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
() $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) input: | 1 + 2 * 3 + !- ++ ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' 
() $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' () $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | (#) + (#) = 2222 ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2222) Shifting token "number" (2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () 
$3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () 
Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2222) Shifting token "number" (2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | (1 + #) = 1111 ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | (# + 1) = 1111 ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () 
Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token 
'=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' 
() Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1482: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token 
'/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 
Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1482: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1482: cat stderr 557. calc.at:1482: ok 561. calc.at:1486: testing Calculator glr2.cc parse.error=verbose %debug %name-prefix "calc" %verbose ... ./calc.at:1486: mv calc.y.tmp calc.y ./calc.at:1486: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1486: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from calc.cc:98: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2198:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at calc.cc:2037:66, inlined from 'void calc::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, calc::parser::glr_state*, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2463:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2198:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:2023:58, inlined from 'void calc::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const calc::parser::value_type&)' at calc.cc:2985:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2198:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:2023:58, inlined from 'void calc::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2968:58, inlined from 'YYRESULTTAG calc::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at calc.cc:2956:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./calc.at:1485: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (7) Shifting token number (7) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () 
-> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (5) Shifting token number (5) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) 
Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token 
is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1) $2 = token '=' 
() $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm 
exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () 
Shifting token '-' () Entering state 2 Reading a token Next token is token number (4) Shifting token number (4) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (-1) 
$3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering 
state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (256) Shifting token number (256) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = 
nterm exp (4) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (64) Shifting token number (64) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr:
Shifting token end of file ()
Entering state 16
Cleanup: popping token end of file ()
Cleanup: popping nterm input ()
input:
  | 1 2
./calc.at:1485: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
   $1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token number (2)
syntax error, unexpected number
Error: popping nterm exp (1)
Cleanup: discarding lookahead token number (2)
./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
   $1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token number (2)
syntax error, unexpected number
Error: popping nterm exp (1)
Cleanup: discarding lookahead token number (2)
./calc.at:1485: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1485: cat stderr
input:
  | 1//2
./calc.at:1485: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
   $1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token '/' ()
Shifting token '/' ()
Entering state 22
Reading a token
Next token is token '/' ()
syntax error, unexpected '/', expecting number or '-' or '(' or '!'
Error: popping token '/' ()
Error: popping nterm exp (1)
Cleanup: discarding lookahead token '/' ()
./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
   $1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token '/' ()
Shifting token '/' ()
Entering state 22
Reading a token
Next token is token '/' ()
syntax error, unexpected '/', expecting number or '-' or '(' or '!'
Error: popping token '/' ()
Error: popping nterm exp (1)
Cleanup: discarding lookahead token '/' ()
./calc.at:1485: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1485: cat stderr
input:
  | error
./calc.at:1485: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token invalid token ()
syntax error, unexpected invalid token
Cleanup: discarding lookahead token invalid token ()
./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token invalid token ()
syntax error, unexpected invalid token
Cleanup: discarding lookahead token invalid token ()
./calc.at:1485: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1485: cat stderr
input:
  | 1 = 2 = 3
./calc.at:1485: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
   $1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token '=' ()
Shifting token '=' ()
Entering state 18
Reading a token
Next token is token number (2)
Shifting token number (2)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
   $1 = token number (2)
-> $$ = nterm exp (2)
Entering state 27
Reading a token
Next token is token '=' ()
syntax error, unexpected '='
Error: popping nterm exp (2)
Error: popping token '=' ()
Error: popping nterm exp (1)
Cleanup: discarding lookahead token '=' ()
./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
   $1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token '=' ()
Shifting token '=' ()
Entering state 18
Reading a token
Next token is token number (2)
Shifting token number (2)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
   $1 = token number (2)
-> $$ = nterm exp (2)
Entering state 27
Reading a token
Next token is token '=' ()
syntax error, unexpected '='
Error: popping nterm exp (2)
Error: popping token '=' ()
Error: popping nterm exp (1)
Cleanup: discarding lookahead token '=' ()
./calc.at:1485: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1485: cat stderr
input:
  |
  | +1
./calc.at:1485: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '\n' ()
Shifting token '\n' ()
Entering state 3
Reducing stack 0 by rule 3 (line 87):
   $1 = token '\n' ()
-> $$ = nterm line ()
Entering state 7
Reducing stack 0 by rule 1 (line 82):
   $1 = nterm line ()
-> $$ = nterm input ()
Entering state 6
Reading a token
Next token is token '+' ()
syntax error, unexpected '+'
Error: popping nterm input ()
Cleanup: discarding lookahead token '+' ()
./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '\n' ()
Shifting token '\n' ()
Entering state 3
Reducing stack 0 by rule 3 (line 87):
   $1 = token '\n' ()
-> $$ = nterm line ()
Entering state 7
Reducing stack 0 by rule 1 (line 82):
   $1 = nterm line ()
-> $$ = nterm input ()
Entering state 6
Reading a token
Next token is token '+' ()
syntax error, unexpected '+'
Error: popping nterm input ()
Cleanup: discarding lookahead token '+' ()
./calc.at:1485: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1485: cat stderr
./calc.at:1485: $PREPARSER ./calc /dev/null
stderr:
Starting parse
Entering state 0
Reading a token
Now at end of input.
syntax error, unexpected end of file
Cleanup: discarding lookahead token end of file ()
./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Now at end of input.
syntax error, unexpected end of file Cleanup: discarding lookahead token end of file () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 
= token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 
92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 120): $1 = token '!' () $2 = token '!' () Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token 
Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 120): $1 = token '!' () $2 = token '!' 
() Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = 
nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 119): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () 
Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 119): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () 
Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | (* *) + (*) + (*) ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 121): $1 = token '!' 
() $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 121): $1 = token '!' 
() $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) input: | 1 + 2 * 3 + !- ++ ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 122): $1 = token '!' 
() $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 122): $1 = token '!' () $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | (#) + (#) = 2222 ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2222) Shifting token number (2222) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = 
nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token 
'=' () Entering state 18 Reading a token Next token is token number (2222) Shifting token number (2222) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | (1 + #) = 1111 ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr input: | (# + 1) = 1111 ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 
Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () stderr: ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stdout: ./calc.at:1486: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc ./calc.at:1485: cat stderr input: input: | (1 + # + 1) = 1111 ./calc.at:1485: $PREPARSER ./calc input | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token 
is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm 
exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = 
token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (7) Shifting token "number" (7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by 
rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (5) Shifting token "number" (5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing 
stack 0 by rule 12 (line 103): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is 
token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = 
token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering 
state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (4) Shifting token "number" (4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token 
Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm 
line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (256) Shifting token "number" (256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 
Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (64) Shifting token "number" (64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (64) $2 = token '\n' () -> $$ = 
nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (7) Shifting token "number" (7) Entering state 1 Reducing 
stack 0 by rule 5 (line 79): $1 = token "number" (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token 
'-' () Entering state 2 Reading a token Next token is token "number" (5) Shifting token "number" (5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = 
nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering 
state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' 
() Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp 
(2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (4) Shifting token "number" (4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) 
Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing 
stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (256) Shifting token "number" (256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = 
nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (64) Shifting token "number" (64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1485: cat stderr input: | 1 2 ./calc.at:1486: $PREPARSER ./calc input input: | (1 + 1) / (1 - 1) ./calc.at:1485: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token "number" (2) syntax error, unexpected number Error: popping nterm exp (1) Cleanup: discarding lookahead token "number" (2) ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 
5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 106): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1485: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token "number" (2) syntax error, unexpected number Error: popping nterm exp (1) Cleanup: discarding lookahead token "number" (2) stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a 
token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 106): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | 1//2 ./calc.at:1486: $PREPARSER ./calc input ./calc.at:1485: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1485: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '/' () syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '/' () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 559. calc.at:1485: ok stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '/' () syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '/' () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | error ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" () syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" () syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | 1 = 2 = 3 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '=' () syntax error, unexpected '=' Error: popping nterm exp (2) Error: popping token '=' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '=' () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 
Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '=' () syntax error, unexpected '=' Error: popping nterm exp (2) Error: popping token '=' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '=' () 562. calc.at:1487: testing Calculator glr2.cc parse.error=custom %debug %name-prefix "calc" %verbose ... ./calc.at:1487: mv calc.y.tmp calc.y ./calc.at:1487: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | | +1 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '+' () syntax error, unexpected '+' Error: popping nterm input () Cleanup: discarding lookahead token '+' () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '+' () syntax error, unexpected '+' Error: popping nterm input () Cleanup: discarding lookahead token '+' () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr ./calc.at:1486: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Reading a token Now at end of input. syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Now at end of input. 
syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by 
rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 
(line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1487: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' () $2 = token '!' () Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token 
'(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' () $2 = token '!' 
() Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm 
line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading 
a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is 
token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (* *) + (*) + (*) ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
() $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
() $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) input: | 1 + 2 * 3 + !- ++ ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' 
() $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' () $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (#) + (#) = 2222 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2222) Shifting token "number" (2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () 
$3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () 
Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2222) Shifting token "number" (2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (1 + #) = 1111 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (# + 1) = 1111 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () 
Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token 
'=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' 
() Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token 
'/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 
Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr 560. calc.at:1486: ok 563. calc.at:1489: testing Calculator C++ %glr-parser parse.error=verbose %debug %name-prefix "calc" api.token.prefix={TOK_} %verbose ... ./calc.at:1489: mv calc.y.tmp calc.y ./calc.at:1489: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1489: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS stderr: stdout: ./calc.at:1489: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (7) Shifting token "number" (7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 
69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (5) Shifting token "number" (5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-5) $2 = token '=' () $3 
= nterm exp (-5) -> $$ = nterm exp (-5) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp 
(1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' 
() Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token 
is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next 
token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (4) Shifting token "number" (4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is 
token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token 
"number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (256) Shifting token "number" (256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 
Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (64) Shifting token "number" (64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (7) Shifting token "number" (7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): 
$1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (5) Shifting token "number" (5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-5) $2 = token '=' () $3 = 
nterm exp (-5) -> $$ = nterm exp (-5) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp 
(1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' 
() Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token 
is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next 
token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token "number" (4) Shifting token "number" (4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is 
token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token 
"number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (256) Shifting token "number" (256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 
Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (64) Shifting token "number" (64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () input: | 1 2 ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token "number" (2) syntax error, unexpected number Error: popping nterm exp (1) Cleanup: discarding lookahead token "number" (2) ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token "number" (2) syntax error, unexpected number Error: popping nterm exp (1) Cleanup: discarding lookahead token "number" (2) ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr input: | 1//2 ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '/' () syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
Error: popping token '/' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '/' () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '/' () syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '/' () ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr input: | error ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" () syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" () syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" () ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr input: | 1 = 2 = 3 ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '=' () syntax error, unexpected '=' Error: popping nterm exp (2) Error: popping token '=' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '=' () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '=' () syntax error, unexpected '=' Error: popping nterm exp (2) Error: popping token '=' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '=' () ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr input: | | +1 ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '+' () syntax error, unexpected '+' Error: popping nterm input () Cleanup: discarding lookahead token '+' () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '+' () syntax error, unexpected '+' Error: popping nterm input () Cleanup: discarding lookahead token '+' () ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr ./calc.at:1489: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Reading a token Now at end of input. syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Now at end of input. 
syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" () ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by 
rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 
(line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input ()
./calc.at:1489: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1489: cat stderr
input:
| (!!) + (1 2) = 1
./calc.at:1489: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' () $2 = token '!' () Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29
Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input ()
./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' () $2 = token '!'
() Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm 
line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input ()
./calc.at:1489: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1489: cat stderr
input:
| (- *) + (1 2) = 1
./calc.at:1489: $PREPARSER ./calc input
stderr:
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from calc.cc:98:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2190:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at calc.cc:2029:66,
    inlined from 'void calc::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, calc::parser::glr_state*, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2455:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2190:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:2015:58,
    inlined from 'void calc::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const calc::parser::value_type&)' at calc.cc:2977:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2190:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:2015:58,
    inlined from 'void calc::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2960:58,
    inlined from 'YYRESULTTAG calc::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at calc.cc:2948:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is 
token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input ()
./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1486: "$PERL" -ne '
  chomp;
  print "$ARGV:$.: {$_}\n"
    if (# No starting/ending empty lines.
        (eof || $. == 1) && /^\s*$/
        # No trailing space.
        || /\s$/
        # No tabs.
        || /\t/
       )' calc.cc
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token "number" (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token "number" (2) Error: discarding token "number" (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is 
token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input ()
input:
| 1 + 2 * 3 = 7
| 1 + 2 * -3 = -5
|
| -1^2 = -1
| (-1)^2 = 1
|
| ---1 = -1
|
| 1 - 2 - 3 = -4
| 1 - (2 - 3) = 2
|
| 2^2^3 = 256
| (2^2)^3 = 64
./calc.at:1486: $PREPARSER ./calc input
./calc.at:1489: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1489: cat stderr
input:
| (* *) + (*) + (*)
./calc.at:1489: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input ()
./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
stderr:
Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (7) Shifting token number (7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line
() -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (5) Shifting token number (5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) 
Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token 
is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1) $2 = token '=' 
() $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm 
exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () 
Shifting token '-' () Entering state 2 Reading a token Next token is token number (4) Shifting token number (4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) 
$3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering 
state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (256) Shifting token number (256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = 
nterm exp (4)
Entering state 8
Reading a token
Next token is token '^' ()
Shifting token '^' ()
Entering state 23
Reading a token
Next token is token number (3)
Shifting token number (3)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token number (3)
-> $$ = nterm exp (3)
Entering state 32
Reading a token
Next token is token '=' ()
Reducing stack 0 by rule 12 (line 103):
   $1 = nterm exp (4)
   $2 = token '^' ()
   $3 = nterm exp (3)
-> $$ = nterm exp (64)
Entering state 8
Next token is token '=' ()
Shifting token '=' ()
Entering state 18
Reading a token
Next token is token number (64)
Shifting token number (64)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token number (64)
-> $$ = nterm exp (64)
Entering state 27
Reading a token
Next token is token '\n' ()
Reducing stack 0 by rule 6 (line 80):
   $1 = nterm exp (64)
   $2 = token '=' ()
   $3 = nterm exp (64)
-> $$ = nterm exp (64)
Entering state 8
Next token is token '\n' ()
Shifting token '\n' ()
Entering state 24
Reducing stack 0 by rule 4 (line 75):
   $1 = nterm exp (64)
   $2 = token '\n' ()
-> $$ = nterm line ()
Entering state 17
Reducing stack 0 by rule 2 (line 70):
   $1 = nterm input ()
   $2 = nterm line ()
-> $$ = nterm input ()
Entering state 6
Reading a token
Now at end of input.
Shifting token end of input ()
Entering state 16
Cleanup: popping token end of input ()
Cleanup: popping nterm input ()
./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
Starting parse
Entering state 0
Reading a token
Next token is token '(' ()
Shifting token '(' ()
Entering state 4
Reading a token
Next token is token '*' ()
syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by 
rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (7) Shifting token number (7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () 
Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (5) Shifting token number (5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing 
stack 0 by rule 12 (line 103): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () 
Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) 
-> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 
= token number (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (4) Shifting token number (4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 
Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 
= nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (256) Shifting token number (256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is 
token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (64) Shifting token number (64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () 
Entering state 6
Reading a token
Now at end of input.
Shifting token end of input ()
Entering state 16
Cleanup: popping token end of input ()
Cleanup: popping nterm input ()
input:
| 1 2
./calc.at:1486: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token number (2)
syntax error, unexpected number
Error: popping nterm exp (1)
Cleanup: discarding lookahead token number (2)
./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1489: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token number (2)
syntax error, unexpected number
Error: popping nterm exp (1)
Cleanup: discarding lookahead token number (2)
./calc.at:1489: cat stderr
input:
| 1 + 2 * 3 + !+ ++
./calc.at:1489: $PREPARSER ./calc input
./calc.at:1486: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1)
Shifting token "number" (1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token '+' ()
Shifting token '+' ()
Entering state 20
Reading a token
Next token is token "number" (2)
Shifting token "number" (2)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (2)
-> $$ = nterm exp (2)
Entering state 29
Reading a token
Next token is token '*' ()
Shifting token '*' ()
Entering state 21
Reading a token
Next token is token "number" (3)
Shifting token "number" (3)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (3)
-> $$ = nterm exp (3)
Entering state 30
Reading a token
Next token is token '+' ()
Reducing stack 0 by rule 9 (line 92):
   $1 = nterm exp (2)
   $2 = token '*' ()
   $3 = nterm exp (3)
-> $$ = nterm exp (6)
Entering state 29
Next token is token '+' ()
Reducing stack 0 by rule 7 (line 90):
   $1 = nterm exp (1)
   $2 = token '+' ()
   $3 = nterm exp (6)
-> $$ = nterm exp (7)
Entering state 8
Next token is token '+' ()
Shifting token '+' ()
Entering state 20
Reading a token
Next token is token '!' ()
Shifting token '!' ()
Entering state 5
Reading a token
Next token is token '+' ()
Shifting token '+' ()
Entering state 14
Reducing stack 0 by rule 17 (line 108):
   $1 = token '!' ()
   $2 = token '+' ()
Cleanup: popping token '+' ()
Cleanup: popping nterm exp (7)
./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1486: cat stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1)
Shifting token "number" (1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token '+' ()
Shifting token '+' ()
Entering state 20
Reading a token
Next token is token "number" (2)
Shifting token "number" (2)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (2)
-> $$ = nterm exp (2)
Entering state 29
Reading a token
Next token is token '*' ()
Shifting token '*' ()
Entering state 21
Reading a token
Next token is token "number" (3)
Shifting token "number" (3)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (3)
-> $$ = nterm exp (3)
Entering state 30
Reading a token
Next token is token '+' ()
Reducing stack 0 by rule 9 (line 92):
   $1 = nterm exp (2)
   $2 = token '*' ()
   $3 = nterm exp (3)
-> $$ = nterm exp (6)
Entering state 29
Next token is token '+' ()
Reducing stack 0 by rule 7 (line 90):
   $1 = nterm exp (1)
   $2 = token '+' ()
   $3 = nterm exp (6)
-> $$ = nterm exp (7)
Entering state 8
Next token is token '+' ()
Shifting token '+' ()
Entering state 20
Reading a token
Next token is token '!' ()
Shifting token '!' ()
Entering state 5
Reading a token
Next token is token '+' ()
Shifting token '+' ()
Entering state 14
Reducing stack 0 by rule 17 (line 108):
   $1 = token '!' ()
   $2 = token '+' ()
Cleanup: popping token '+' ()
Cleanup: popping nterm exp (7)
input:
input:
| 1//2
./calc.at:1486: $PREPARSER ./calc input
| 1 + 2 * 3 + !- ++
./calc.at:1489: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token '/' ()
Shifting token '/' ()
Entering state 22
Reading a token
Next token is token '/' ()
syntax error, unexpected '/', expecting number or '-' or '(' or '!'
Error: popping token '/' ()
Error: popping nterm exp (1)
Cleanup: discarding lookahead token '/' ()
./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1)
Shifting token "number" (1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token '+' ()
Shifting token '+' ()
Entering state 20
Reading a token
Next token is token "number" (2)
Shifting token "number" (2)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (2)
-> $$ = nterm exp (2)
Entering state 29
Reading a token
Next token is token '*' ()
Shifting token '*' ()
Entering state 21
Reading a token
Next token is token "number" (3)
Shifting token "number" (3)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (3)
-> $$ = nterm exp (3)
Entering state 30
Reading a token
Next token is token '+' ()
Reducing stack 0 by rule 9 (line 92):
   $1 = nterm exp (2)
   $2 = token '*' ()
   $3 = nterm exp (3)
-> $$ = nterm exp (6)
Entering state 29
Next token is token '+' ()
Reducing stack 0 by rule 7 (line 90):
   $1 = nterm exp (1)
   $2 = token '+' ()
   $3 = nterm exp (6)
-> $$ = nterm exp (7)
Entering state 8
Next token is token '+' ()
Shifting token '+' ()
Entering
state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' () $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '/' () syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '/' () Starting parse Entering state 0 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (2) Shifting token "number" (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token "number" (3) Shifting token "number" (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm 
exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' () $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr ./calc.at:1489: cat stderr input: | error ./calc.at:1486: $PREPARSER ./calc input input: | (#) + (#) = 2222 ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token invalid token () syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () 
-> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2222) Shifting token "number" (2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () stderr: ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Reading a token Next token is token invalid token () syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token () stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (2222) Shifting token "number" (2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2222) 
-> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr ./calc.at:1489: cat stderr input: input: | 1 = 2 = 3 ./calc.at:1486: $PREPARSER ./calc input | (1 + #) = 1111 ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '=' () syntax error, unexpected '=' Error: popping nterm exp (2) Error: popping token '=' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '=' () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering 
state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '=' () syntax error, unexpected '=' Error: popping nterm exp (2) Error: popping token '=' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '=' () stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ 
= nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | | +1 ./calc.at:1486: $PREPARSER ./calc input ./calc.at:1489: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '+' () syntax error, unexpected '+' Error: popping nterm input () Cleanup: discarding lookahead token '+' () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: | (# + 1) = 1111 Starting parse Entering state 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '+' () syntax error, unexpected '+' Error: popping nterm input () Cleanup: discarding lookahead token '+' () ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () 
Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: cat stderr ./calc.at:1486: $PREPARSER ./calc /dev/null stderr: ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Reading a token Now at end of input. syntax error, unexpected end of input Cleanup: discarding lookahead token end of input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1489: cat stderr Starting parse Entering state 0 Reading a token Now at end of input. syntax error, unexpected end of input Cleanup: discarding lookahead token end of input () input: | (1 + # + 1) = 1111 ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' 
() $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1486: cat stderr Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token "number" (1) Error: discarding token "number" (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token "number" (1111) Shifting token "number" (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by 
rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1486: $PREPARSER ./calc input ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading 
a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: cat stderr ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): 
$1 = token number (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () input: | (1 + 1) / (1 - 1) ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token 
')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token "number" (1) Shifting token "number" (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 
Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token "end of input" () Entering state 16 Cleanup: popping token "end of input" () Cleanup: popping nterm input () ./calc.at:1486: cat stderr input: ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 | (!!) + (1 2) = 1 ./calc.at:1486: $PREPARSER ./calc input ./calc.at:1489: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' () $2 = token '!' 
() Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = 
nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 563. calc.at:1489: ok stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' () $2 = token '!' () Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering 
state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () 
Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 
Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (* *) + (*) + (*) ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 564. calc.at:1489: testing Calculator glr2.cc parse.error=verbose %debug %name-prefix "calc" api.token.prefix={TOK_} %verbose ... ./calc.at:1489: mv calc.y.tmp calc.y ./calc.at:1489: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
() $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
() $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) input: | 1 + 2 * 3 + !- ++ ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' 
() $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' () $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1489: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc $LIBS ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (#) + (#) = 2222 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2222) Shifting token number (2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = 
nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting 
token '=' () Entering state 18 Reading a token Next token is token number (2222) Shifting token number (2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (1 + #) = 1111 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (# + 1) = 1111 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 
6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm 
exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 
80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1486: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 
Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token ')' () 
Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1486: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1486: cat stderr 561. calc.at:1486: ok 565. calc.at:1491: testing Calculator C++ %glr-parser %locations %header parse.error=verbose %debug %name-prefix "calc" %verbose %parse-param {semantic_value *result}{int *count}{int *nerrs} ... ./calc.at:1491: mv calc.y.tmp calc.y ./calc.at:1491: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1491: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from calc.cc:98: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2194:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at calc.cc:2033:66, inlined from 'void calc::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, calc::parser::glr_state*, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2459:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2194:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:2019:58, inlined from 'void calc::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const calc::parser::value_type&)' at calc.cc:2980:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2194:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:2019:58, inlined from 'void calc::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2963:58, inlined from 'YYRESULTTAG calc::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at calc.cc:2951:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./calc.at:1487: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
        || /\t/
    )' calc.cc
input:
  | 1 + 2 * 3 = 7
  | 1 + 2 * -3 = -5
  |
  | -1^2 = -1
  | (-1)^2 = 1
  |
  | ---1 = -1
  |
  | 1 - 2 - 3 = -4
  | 1 - (2 - 3) = 2
  |
  | 2^2^3 = 256
  | (2^2)^3 = 64
./calc.at:1487: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (7) Shifting token number (7) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line ()
-> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (5) Shifting token number (5) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) 
Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token 
is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1) $2 = token '=' 
() $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm 
exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () 
Shifting token '-' () Entering state 2 Reading a token Next token is token number (4) Shifting token number (4) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (-1) 
$3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering 
state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (256) Shifting token number (256) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = 
nterm exp (4) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (64) Shifting token number (64) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (7) Shifting token number (7) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = 
nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (5) Shifting token number (5) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) Entering state 
8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () 
Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp 
(1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (1) -> $$ = 
nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token 
'-' () Entering state 2 Reading a token Next token is token number (4) Shifting token number (4) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 115): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' 
() -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 87): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Reading 
a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (256) Shifting token number (256) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) 
Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 116): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (64) Shifting token number (64) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 83): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file ()
Entering state 16
Cleanup: popping token end of file ()
Cleanup: popping nterm input ()
input:
| 1 2
./calc.at:1487: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
$1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token number (2)
syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n'])
Error: popping nterm exp (1)
Cleanup: discarding lookahead token number (2)
./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
$1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token number (2)
syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] ['\n'])
Error: popping nterm exp (1)
Cleanup: discarding lookahead token number (2)
./calc.at:1487: cat stderr
input:
| 1//2
./calc.at:1487: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
$1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token '/' ()
Shifting token '/' ()
Entering state 22
Reading a token
Next token is token '/' ()
syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!'])
Error: popping token '/' ()
Error: popping nterm exp (1)
Cleanup: discarding lookahead token '/' ()
./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
$1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token '/' ()
Shifting token '/' ()
Entering state 22
Reading a token
Next token is token '/' ()
syntax error on token ['/'] (expected: [number] ['-'] ['('] ['!'])
Error: popping token '/' ()
Error: popping nterm exp (1)
Cleanup: discarding lookahead token '/' ()
./calc.at:1487: cat stderr
input:
| error
./calc.at:1487: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token invalid token ()
syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!'])
Cleanup: discarding lookahead token invalid token ()
./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token invalid token ()
syntax error on token [invalid token] (expected: [number] ['-'] ['\n'] ['('] ['!'])
Cleanup: discarding lookahead token invalid token ()
./calc.at:1487: cat stderr
input:
| 1 = 2 = 3
./calc.at:1487: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
$1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token '=' ()
Shifting token '=' ()
Entering state 18
Reading a token
Next token is token number (2)
Shifting token number (2)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
$1 = token number (2)
-> $$ = nterm exp (2)
Entering state 27
Reading a token
Next token is token '=' ()
syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^'])
Error: popping nterm exp (2)
Error: popping token '=' ()
Error: popping nterm exp (1)
Cleanup: discarding lookahead token '=' ()
./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
$1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token '=' ()
Shifting token '=' ()
Entering state 18
Reading a token
Next token is token number (2)
Shifting token number (2)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
$1 = token number (2)
-> $$ = nterm exp (2)
Entering state 27
Reading a token
Next token is token '=' ()
syntax error on token ['='] (expected: ['-'] ['+'] ['*'] ['/'] ['^'])
Error: popping nterm exp (2)
Error: popping token '=' ()
Error: popping nterm exp (1)
Cleanup: discarding lookahead token '=' ()
./calc.at:1487: cat stderr
input:
|
| +1
./calc.at:1487: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '\n' ()
Shifting token '\n' ()
Entering state 3
Reducing stack 0 by rule 3 (line 87):
$1 = token '\n' ()
-> $$ = nterm line ()
Entering state 7
Reducing stack 0 by rule 1 (line 82):
$1 = nterm line ()
-> $$ = nterm input ()
Entering state 6
Reading a token
Next token is token '+' ()
syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!'])
Error: popping nterm input ()
Cleanup: discarding lookahead token '+' ()
./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '\n' ()
Shifting token '\n' ()
Entering state 3
Reducing stack 0 by rule 3 (line 87):
$1 = token '\n' ()
-> $$ = nterm line ()
Entering state 7
Reducing stack 0 by rule 1 (line 82):
$1 = nterm line ()
-> $$ = nterm input ()
Entering state 6
Reading a token
Next token is token '+' ()
syntax error on token ['+'] (expected: [end of file] [number] ['-'] ['\n'] ['('] ['!'])
Error: popping nterm input ()
Cleanup: discarding lookahead token '+' ()
./calc.at:1487: cat stderr
./calc.at:1487: $PREPARSER ./calc /dev/null
stderr:
Starting parse
Entering state 0
Reading a token
Now at end of input.
syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!'])
Cleanup: discarding lookahead token end of file ()
./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Now at end of input.
syntax error on token [end of file] (expected: [number] ['-'] ['\n'] ['('] ['!'])
Cleanup: discarding lookahead token end of file ()
./calc.at:1487: cat stderr
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1487: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' ()
Shifting token '(' ()
Entering state 4
Reading a token
Next token is token ')' ()
syntax error on token [')'] (expected: [number] ['-'] ['('] ['!'])
Shifting token error ()
Entering state 11
Next token is token ')' ()
Shifting token ')' ()
Entering state 25
Reducing stack 0 by rule 14 (line 118):
$1 = token '(' ()
$2 = token error ()
$3 = token ')' ()
-> $$ = nterm exp (1111)
Entering state 8
Reading a token
Next token is token '+' ()
Shifting token '+' ()
Entering state 20
Reading a token
Next token is token '(' ()
Shifting token '(' ()
Entering state 4
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
$1 = token number (1)
-> $$ = nterm exp (1)
Entering state 12
Reading a token
Next token is token '+' ()
Shifting token '+' ()
Entering state 20
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
$1 = token number (1)
-> $$ = nterm exp (1)
Entering state 29
Reading a token
Next token is token '+' ()
Reducing stack 0 by rule 7 (line 103):
$1 = nterm exp (1)
$2 = token '+' ()
$3 = nterm exp (1)
-> $$ = nterm exp (2)
Entering state 12
Next token is token '+' ()
Shifting token '+' ()
Entering state 20
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
$1 = token number (1)
-> $$ = nterm exp (1)
Entering state 29
Reading a token
Next token is token '+' ()
Reducing stack 0 by rule 7 (line 103):
$1 = nterm exp (2)
$2 = token '+' ()
$3 = nterm exp (1)
-> $$ = nterm exp (3)
Entering state 12
Next token is token '+' ()
Shifting token '+' ()
Entering state 20
Reading a token
Next token is token ')' ()
syntax error on token [')'] (expected: [number] ['-'] ['('] ['!'])
Error: popping token '+' ()
Error: popping nterm exp (3)
Shifting token error ()
Entering state 11
Next token is token ')' ()
Shifting token ')' ()
Entering state 25
Reducing stack 0 by rule 14 (line 118):
$1 = token '(' ()
$2 = token error ()
$3 = token ')' ()
-> $$ = nterm exp (1111)
Entering state 29
Reading a token
Next token is token '+' ()
Reducing stack 0 by rule 7 (line 103):
$1 = nterm exp (1111)
$2 = token '+' ()
$3 = nterm exp (1111)
-> $$ = nterm exp (2222)
Entering state 8
Next token is token '+' ()
Shifting token '+' ()
Entering state 20
Reading a token
Next token is token '(' ()
Shifting token '(' ()
Entering state 4
Reading a token
Next token is token '*' ()
syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
Shifting token error ()
Entering state 11
Next token is token '*' ()
Error: discarding token '*' ()
Reading a token
Next token is token '*' ()
Error: discarding token '*' ()
Reading a token
Next token is token '*' ()
Error: discarding token '*' ()
Reading a token
Next token is token ')' ()
Entering state 11
Next token is token ')' ()
Shifting token ')' ()
Entering state 25
Reducing stack 0 by rule 14 (line 118):
$1 = token '(' ()
$2 = token error ()
$3 = token ')' ()
-> $$ = nterm exp (1111)
Entering state 29
Reading a token
Next token is token '+' ()
Reducing stack 0 by rule 7 (line 103):
$1 = nterm exp (2222)
$2 = token '+' ()
$3 = nterm exp (1111)
-> $$ = nterm exp (3333)
Entering state 8
Next token is token '+' ()
Shifting token '+' ()
Entering state 20
Reading a token
Next token is token '(' ()
Shifting token '(' ()
Entering state 4
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
$1 = token number (1)
-> $$ = nterm exp (1)
Entering state 12
Reading a token
Next token is token '*' ()
Shifting token '*' ()
Entering state 21
Reading a token
Next token is token number (2)
Shifting token number (2)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
$1 = token number (2)
-> $$ = nterm exp (2)
Entering state 30
Reading a token
Next token is token '*' ()
Reducing stack 0 by rule 9 (line 105):
$1 = nterm exp (1)
$2 = token '*' ()
$3 = nterm exp (2)
-> $$ = nterm exp (2)
Entering state 12
Next token is token '*' ()
Shifting token '*' ()
Entering state 21
Reading a token
Next token is token '*' ()
syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!'])
Error: popping token '*' ()
Error: popping nterm exp (2)
Shifting token error ()
Entering state 11
Next token is token '*' ()
Error: discarding token '*' ()
Reading a token
Next token is token ')' ()
Entering state 11
Next token is token ')' ()
Shifting token ')' ()
Entering state 25
Reducing stack 0 by rule 14 (line 118):
$1 = token '(' ()
$2 = token error ()
$3 = token ')' ()
-> $$ = nterm exp (1111)
Entering state 29
Reading a token
Next token is token '=' ()
Reducing stack 0 by rule 7 (line 103):
$1 = nterm exp (3333)
$2 = token '+' ()
$3 = nterm exp (1111)
-> $$ = nterm exp (4444)
Entering state 8
Next token is token '=' ()
Shifting token '=' ()
Entering state 18
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 92):
$1 = token number (1)
-> $$ = nterm exp (1)
Entering state 27
Reading a token
Next token is token '\n' ()
Reducing stack 0 by rule 6 (line 93):
$1 = nterm exp (4444)
$2 = token '=' ()
$3 = nterm exp (1)
error: 4444 != 1
-> $$ = nterm exp (4444)
Entering state 8
Next token is token '\n' ()
Shifting token '\n' ()
Entering state 24
Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 
Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error on token [')'] (expected: [number] ['-'] ['('] ['!']) Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number 
(1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = 
nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1487: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 120): $1 = token '!' () $2 = token '!' () Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 
= token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 120): $1 = token '!' () $2 = token '!' 
() Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing 
stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1487: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 119): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm 
exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 119): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading 
a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error on token [number] (expected: ['='] ['-'] ['+'] ['*'] ['/'] ['^'] [')']) Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: cat stderr input: | (* *) + (*) + (*) ./calc.at:1487: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) Shifting token error () 
Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error on token ['*'] (expected: [number] ['-'] ['('] ['!']) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1487: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 121): $1 = token '!' 
() $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 121): $1 = token '!' 
() $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) input: | 1 + 2 * 3 + !- ++ ./calc.at:1487: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 122): $1 = token '!' 
() $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 105): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 122): $1 = token '!' 
() $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1487: cat stderr input: | (#) + (#) = 2222 ./calc.at:1487: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2222) Shifting token number (2222) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering 
state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next 
token is token number (2222) Shifting token number (2222) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: cat stderr input: | (1 + #) = 1111 ./calc.at:1487: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by 
rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token 
Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: cat stderr input: | (# + 1) = 1111 ./calc.at:1487: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 
(line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1487: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = 
nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 118): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 93): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing 
stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1487: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 
(line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 106): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 103): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading 
a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 92): $1 = token number (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 104): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 117): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 106): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 88): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 82): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of file () Entering state 16 Cleanup: popping token end of file () Cleanup: popping nterm input () ./calc.at:1487: cat stderr 562. calc.at:1487: ok 566. calc.at:1491: testing Calculator glr2.cc %locations %header parse.error=verbose %debug %name-prefix "calc" %verbose %parse-param {semantic_value *result}{int *count}{int *nerrs} ... 
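Test 566 feeds the calculator a batch of expressions paired with expected values (`1 + 2 * 3 = 7`, `-1^2 = -1`, `2^2^3 = 256`, `(2^2)^3 = 64`, ...). As a sanity check on those expectations, the same arithmetic can be reproduced in plain Python, whose `**` happens to mirror calc's `^` (right-associative, binding tighter than binary but looser than parenthesized unary minus); this is an illustrative sketch, not part of the test suite:

```python
# Re-check the expected values from test 566's input block.
# Python's ** mirrors calc's ^ here: right-associative, and -1**2 == -(1**2).
cases = [
    ("1 + 2 * 3", 7),
    ("1 + 2 * -3", -5),
    ("-1**2", -1),        # calc: -1^2 = -1
    ("(-1)**2", 1),       # calc: (-1)^2 = 1
    ("---1", -1),
    ("1 - 2 - 3", -4),    # '-' is left-associative
    ("1 - (2 - 3)", 2),
    ("2**2**3", 256),     # '^' is right-associative: 2^(2^3)
    ("(2**2)**3", 64),
]
for expr, expected in cases:
    got = eval(expr)
    assert got == expected, (expr, got, expected)
print("all", len(cases), "cases match")
```

Every case agrees with the `= value` annotations in the test input, including the associativity-sensitive ones (`1 - 2 - 3` vs. `1 - (2 - 3)`, `2^2^3` vs. `(2^2)^3`).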
./calc.at:1491: mv calc.y.tmp calc.y
./calc.at:1491: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y
./calc.at:1491: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS
stderr:
stdout:
./calc.at:1491: "$PERL" -ne '
  chomp;
  print "$ARGV:$.: {$_}\n"
    if (# No starting/ending empty lines.
        (eof || $. == 1) && /^\s*$/
        # No trailing space.
        || /\s$/
        # No tabs.
        || /\t/
       )' calc.cc calc.hh
input:
  | 1 + 2 * 3 = 7
  | 1 + 2 * -3 = -5
  |
  | -1^2 = -1
  | (-1)^2 = 1
  |
  | ---1 = -1
  |
  | 1 - 2 - 3 = -4
  | 1 - (2 - 3) = 2
  |
  | 2^2^3 = 256
  | (2^2)^3 = 64
./calc.at:1491: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token 
'=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Next token is token '=' (2.12: ) Reducing 
stack 0 by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is 
token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: 
) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = 
nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 
10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Reading a token Next token is token "number" (9.9: 3) Shifting 
token "number" (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) 
Shifting token "number" (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = 
nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp 
(12.9-11: 256) Entering state 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (13.7: 
3) -> $$ = nterm exp (13.7: 3) Entering state 32 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (14.1: ) Entering state 16 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) 
Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Reading a token Next token is token '-' (2.14: ) 
Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) 
Entering state 32 Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing 
stack 0 by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Reducing stack 0 
by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 
24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Reading a token Next token is token '-' 
(9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 
28 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is 
token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) 
Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 
(line 79): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (14.1: ) Entering state 16 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) input: | 1 2 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.3: 2) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.3: 2) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { 
my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | 1//2 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | error ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | 1 = 2 = 3 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse 
Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | | +1 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = 
nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr ./calc.at:1491: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing 
stack 0 by rule 5 (line 79): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' 
(1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Reading a token Next token 
is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 90): $1 = 
nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 
-> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" 
(1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" 
(1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | (* *) + (*) + (*) ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) input: | 1 + 2 * 3 + !- ++ ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' 
(1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' 
(1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr stderr: In file included from /usr/include/c++/12/vector:70, from calc.cc:98: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2190:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at calc.cc:2029:66, inlined from 'void calc::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, calc::parser::glr_state*, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2455:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2190:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:2015:58, inlined from 'void calc::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const calc::parser::value_type&)' at calc.cc:2977:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:2190:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:2015:58, inlined from 'void calc::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2960:58, inlined from 'YYRESULTTAG calc::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at calc.cc:2948:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./calc.at:1489: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc input: | (#) + (#) = 2222 ./calc.at:1491: $PREPARSER ./calc input input: stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Reading a token Next 
token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1489: $PREPARSER ./calc input ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error 
(1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr stderr: input: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (7) Shifting token number (7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 
Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (5) Shifting token number (5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) Entering state 8 Next token is token '\n' () 
Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 
(line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp (1) -> $$ = nterm exp (1) 
Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1) $2 = token '\n' () | (1 + #) = 1111 -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp 
(-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () 
Entering state 2 Reading a token Next token is token number (4) Shifting token number (4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ 
= nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Reading a token 
Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (256) Shifting token number (256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) Entering 
state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (64) Shifting token number (64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token 
'\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (7) Shifting token number (7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7) -> $$ = nterm exp (7) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7) $2 = token '=' () $3 = nterm exp (7) -> $$ = nterm exp (7) Entering state 8 
Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (3) -> $$ = nterm exp (-3) Entering state 30 Next token is token '=' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (-3) -> $$ = nterm exp (-6) Entering state 29 Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (-6) -> $$ = nterm exp (-5) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (5) Shifting token number (5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5) -> $$ = nterm exp (5) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 
(line 102): $1 = token '-' () $2 = nterm exp (5) -> $$ = nterm exp (-5) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-5) $2 = token '=' () $3 = nterm exp (-5) -> $$ = nterm exp (-5) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-5) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): 
$1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token ')' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (-1) $2 = token '^' () $3 = nterm exp (2) -> $$ = nterm exp (1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number 
(1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1) $2 = token '=' () $3 = nterm exp (1) -> $$ = nterm exp (1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (-1) -> $$ = nterm exp (1) Entering state 10 Next token is token '=' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (1) Shifting token 
number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (1) -> $$ = nterm exp (-1) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-1) $2 = token '=' () $3 = nterm exp (-1) -> $$ = nterm exp (-1) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-1) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 28 Reading a token Next token is token '-' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (2) -> $$ = nterm exp (-1) Entering state 8 Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token '=' () 
Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (-1) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-4) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token number (4) Shifting token number (4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4) -> $$ = nterm exp (4) Entering state 10 Reading a token Next token is token '\n' () Reducing stack 0 by rule 11 (line 102): $1 = token '-' () $2 = nterm exp (4) -> $$ = nterm exp (-4) Entering state 27 Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (-4) $2 = token '=' () $3 = nterm exp (-4) -> $$ = nterm exp (-4) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (-4) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 
91): $1 = nterm exp (2) $2 = token '-' () $3 = nterm exp (3) -> $$ = nterm exp (-1) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (-1) $3 = token ')' () -> $$ = nterm exp (-1) Entering state 28 Reading a token Next token is token '=' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (-1) -> $$ = nterm exp (2) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2) $2 = token '=' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next 
token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (8) Entering state 32 Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = token '^' () $3 = nterm exp (8) -> $$ = nterm exp (256) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (256) Shifting token number (256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (256) -> $$ = nterm exp (256) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (256) $2 = token '=' () $3 = nterm exp (256) -> $$ = nterm exp (256) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (256) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 12 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 32 Reading a token Next token is token ')' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (2) $2 = 
token '^' () $3 = nterm exp (2) -> $$ = nterm exp (4) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (4) $3 = token ')' () -> $$ = nterm exp (4) Entering state 8 Reading a token Next token is token '^' () Shifting token '^' () Entering state 23 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 32 Reading a token Next token is token '=' () Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4) $2 = token '^' () $3 = nterm exp (3) -> $$ = nterm exp (64) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (64) Shifting token number (64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (64) -> $$ = nterm exp (64) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (64) $2 = token '=' () $3 = nterm exp (64) -> $$ = nterm exp (64) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (64) $2 = token '\n' () -> $$ = nterm line () Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input () $2 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) 
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 16
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
input:
| 1 2
./calc.at:1489: $PREPARSER ./calc input
./calc.at:1491: "$PERL" -pi -e 'use strict;
s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
{ my $unexp = $1;
  my @exps = $2 =~ /\[(.*?)\]/g;
  ($#exps && $#exps < 4)
  ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
  : "syntax error, unexpected $unexp";
}eg
' expout || exit 77
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
$1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token number (2)
syntax error, unexpected number
Error: popping nterm exp (1)
Cleanup: discarding lookahead token number (2)
./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1491: cat stderr
stderr:
input:
Starting parse
Entering state 0
Reading a token
Next token is token number (1)
Shifting token number (1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
$1 = token number (1)
-> $$ = nterm exp (1)
Entering state 8
Reading a token
Next token is token number (2)
syntax error, unexpected number
Error: popping nterm exp (1)
Cleanup: discarding lookahead token number (2)
| (# + 1) = 1111
./calc.at:1491: $PREPARSER ./calc input
./calc.at:1489: "$PERL" -pi -e 'use strict;
s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
{ my $unexp = $1;
  my @exps = $2 =~ /\[(.*?)\]/g;
  ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1489: cat stderr ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading 
a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | 1//2 ./calc.at:1489: $PREPARSER ./calc input stderr: ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '/' () syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '/' () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '/' () syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '/' () ./calc.at:1491: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1491: $PREPARSER ./calc input ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: ./calc.at:1489: cat stderr Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Reading a token Next token is token "number" (1.15-18: 1111) Shifting token "number" (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' 
(1.19-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: input: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Reading a token Next token is token "number" (1.15-18: 1111) Shifting token "number" (1.15-18: 1111) Entering state 1 Reducing 
stack 0 by rule 5 (line 79): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
| error
./calc.at:1489: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token invalid token () syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token ()
./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
stderr:
Starting parse Entering state 0 Reading a token Next token is token invalid token () syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token ()
./calc.at:1491: cat stderr
./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
input:
./calc.at:1489: cat stderr
| (1 + 1) / (1 - 1)
./calc.at:1491: $PREPARSER ./calc input
input:
| 1 = 2 = 3
./calc.at:1489: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '=' () syntax error, unexpected '=' Error: popping nterm exp (2) Error: popping token '=' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '=' ()
./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering
state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 27 Reading a token Next token is token '=' () syntax error, unexpected '=' Error: popping nterm exp (2) Error: popping token '=' () Error: popping nterm exp (1) Cleanup: discarding lookahead token '=' () stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp 
(1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1489: cat stderr
./calc.at:1491: cat stderr
565. calc.at:1491: ok
input:
|
| +1
./calc.at:1489: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '+' () syntax error, unexpected '+' Error: popping nterm input () Cleanup: discarding lookahead token '+' ()
./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token '\n' () Shifting token '\n' () Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Next token is token '+' () syntax error, unexpected '+' Error: popping nterm input () Cleanup: discarding lookahead token '+' ()
./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1489: cat stderr
./calc.at:1489: $PREPARSER ./calc /dev/null
stderr:
Starting parse Entering state 0 Reading a token Now at end of input. syntax error, unexpected end of input Cleanup: discarding lookahead token end of input ()
./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reading a token Now at end of input. syntax error, unexpected end of input Cleanup: discarding lookahead token end of input ()
./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1489: cat stderr
input:
| () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1
./calc.at:1489: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!'
Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): 
$1 = token number (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (3) Entering state 12 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a 
token Next token is token ')' () syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' () Error: popping nterm exp (3) Shifting token error () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token 
is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 30 Reading a token Next token is token '*' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1) $2 = token '*' () $3 = nterm exp (2) -> $$ = nterm exp (2) Entering state 12 Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' () Error: popping nterm exp (2) Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (3333) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (4444) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4444) $2 = token '=' () $3 = nterm exp (1) error: 4444 != 1 -> $$ = nterm exp (4444) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4444) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input ()
./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
567. calc.at:1492: testing Calculator C++ %glr-parser %locations %header parse.error=verbose %debug api.prefix={calc} %verbose %parse-param {semantic_value *result}{int *count}{int *nerrs} ...
./calc.at:1492: mv calc.y.tmp calc.y
./calc.at:1492: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y
./calc.at:1489: cat stderr
input:
| (!!) + (1 2) = 1
./calc.at:1489: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' () $2 = token '!'
() Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = 
nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '!' () Shifting token '!' () Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' () $2 = token '!' () Shifting token error () Entering state 11 Reading a token Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a 
token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () 
Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '-' () Shifting token '-' () Entering state 2 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' () $2 = token error () Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token number (2) syntax error, unexpected number Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token number (2) Error: discarding token number (2) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 
Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (1) error: 2222 != 1 -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr input: | (* *) + (*) + (*) ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token '*' () syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error () Entering state 11 Next token is token '*' () Error: discarding token '*' () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '\n' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2222) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (3333) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (3333) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
() $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '+' () Shifting token '+' () Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
() $2 = token '+' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) input: | 1 + 2 * 3 + !- ++ ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' 
() $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1492: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS stderr: Starting parse Entering state 0 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (2) Shifting token number (2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2) -> $$ = nterm exp (2) Entering state 29 Reading a token Next token is token '*' () Shifting token '*' () Entering state 21 Reading a token Next token is token number (3) Shifting token number (3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (3) -> $$ = nterm exp (3) Entering state 30 Reading a token Next token is token '+' () Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2) $2 = token '*' () $3 = nterm exp (3) -> $$ = nterm exp (6) Entering state 29 Next token is token '+' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (6) -> $$ = nterm exp (7) Entering state 8 Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '!' () Shifting token '!' () Entering state 5 Reading a token Next token is token '-' () Shifting token '-' () Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' () $2 = token '-' () Cleanup: popping token '+' () Cleanup: popping nterm exp (7) ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr input: | (#) + (#) = 2222 ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (2222) Shifting token number (2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = 
nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 29 Reading a token Next token is token '=' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1111) $2 = token '+' () $3 = nterm exp (1111) -> $$ = nterm exp (2222) Entering state 8 Next token is token '=' () Shifting 
token '=' () Entering state 18 Reading a token Next token is token number (2222) Shifting token number (2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2222) -> $$ = nterm exp (2222) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2222) $2 = token '=' () $3 = nterm exp (2222) -> $$ = nterm exp (2222) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2222) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr input: | (1 + #) = 1111 ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr input: | (# + 1) = 1111 ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 
6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token syntax error: invalid character: '#' Next token is token error () Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm 
exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token syntax error: invalid character: '#' Next token is token error () Error: popping token '+' () Error: popping nterm exp (1) Shifting token error () Entering state 11 Next token is token error () Error: discarding token error () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token number (1) Error: discarding token number (1) Reading a token Next token is token ')' () Entering state 11 Next token is token ')' () Shifting token ')' () Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' () $2 = token error () $3 = token ')' () -> $$ = nterm exp (1111) Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 18 Reading a token Next token is token number (1111) Shifting token number (1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1111) -> $$ = nterm exp (1111) Entering state 27 Reading a token Next token is token '\n' () Reducing stack 0 by rule 6 (line 
80): $1 = nterm exp (1111) $2 = token '=' () $3 = nterm exp (1111) -> $$ = nterm exp (1111) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1111) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1489: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 
Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. 
Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '+' () Shifting token '+' () Entering state 20 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 29 Reading a token Next token is token ')' () Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1) $2 = token '+' () $3 = nterm exp (1) -> $$ = nterm exp (2) Entering state 12 Next token is token ')' () Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (2) $3 = token ')' () -> $$ = nterm exp (2) Entering state 8 Reading a token Next token is token '/' () Shifting token '/' () Entering state 22 Reading a token Next token is token '(' () Shifting token '(' () Entering state 4 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 12 Reading a token Next token is token '-' () Shifting token '-' () Entering state 19 Reading a token Next token is token number (1) Shifting token number (1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1) -> $$ = nterm exp (1) Entering state 28 Reading a token Next token is token ')' () Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1) $2 = token '-' () $3 = nterm exp (1) -> $$ = nterm exp (0) Entering state 12 Next token is token ')' () 
Shifting token ')' () Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' () $2 = nterm exp (0) $3 = token ')' () -> $$ = nterm exp (0) Entering state 31 Reading a token Next token is token '\n' () Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (2) $2 = token '/' () $3 = nterm exp (0) error: null divisor -> $$ = nterm exp (2) Entering state 8 Next token is token '\n' () Shifting token '\n' () Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2) $2 = token '\n' () -> $$ = nterm line () Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line () -> $$ = nterm input () Entering state 6 Reading a token Now at end of input. Shifting token end of input () Entering state 16 Cleanup: popping token end of input () Cleanup: popping nterm input () ./calc.at:1489: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1489: cat stderr 564. calc.at:1489: ok 568. calc.at:1492: testing Calculator glr2.cc %locations %header parse.error=verbose %debug api.prefix={calc} %verbose %parse-param {semantic_value *result}{int *count}{int *nerrs} ... ./calc.at:1492: mv calc.y.tmp calc.y ./calc.at:1492: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y ./calc.at:1492: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from calc.hh:59, from calc.cc:80: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:1815:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at calc.cc:1651:66, inlined from 'void calc::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, calc::parser::glr_state*, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2087:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:1815:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:1637:58, inlined from 'void calc::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const calc::parser::value_type&, const calc::parser::location_type&)' at calc.cc:2633:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:1815:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:1637:58, inlined from 'void calc::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2616:58, inlined from 'YYRESULTTAG calc::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at calc.cc:2604:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./calc.at:1491: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc calc.hh input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' 
(1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token 
Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Reading a token Next token is token '=' (4.6: ) Reducing 
stack 0 by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = 
nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input 
(1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm 
line (7.1-8.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token number (9.14: 4) Shifting token number 
(9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm 
exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp 
(12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a 
token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm 
exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (14.1: ) Entering state 16 Cleanup: popping token end of input (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp 
(1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Next token is token 
'=' (2.12: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token 
Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) 
Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) 
$2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is 
token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Reducing 
stack 0 by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Reducing stack 0 
by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 
Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 
by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 
(line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token end of input (14.1: ) Entering state 16 Cleanup: popping token end of input (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) input: | 1 2 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token number (1.3: 2) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token number (1.3: 2) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | 1//2 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | error ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | 1 = 2 = 3 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ 
/\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | | +1 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr ./calc.at:1491: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Reading a token Now at end of input. 
1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Reducing stack 0 
by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp 
(1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by 
rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp 
(1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: stdout: ./calc.at:1492: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. || /\t/ )' calc.cc calc.hh ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp 
(1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: ./calc.at:1491: cat stderr | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1492: $PREPARSER ./calc input input: | (- *) + (1 2) = 1 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = 
nterm exp (1.17: 1) Entering state 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '=' (1.11: 
) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) 
Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line 
(3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' 
(4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp 
(5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next 
token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 
2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) 
Shifting token '-' (10.3: ) Entering state 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Reducing stack 0 by rule 
4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token "number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Reading a token 
Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) 
Entering state 23 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (14.1: ) Entering state 16 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = 
nterm exp (1.17: 1) Entering state 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ?
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | 1 2 ./calc.at:1492: $PREPARSER ./calc input ./calc.at:1491: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.3: 2) input: ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr | (* *) + (*) + (*) ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.3: 2) stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1492: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | 1//2 ./calc.at:1492: $PREPARSER ./calc input ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1491: cat stderr ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) input: | 1 + 2 * 3 + !+ ++ ./calc.at:1491: $PREPARSER ./calc input stderr: ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1492: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) input: input: | 1 + 2 * 3 + !- ++ ./calc.at:1491: $PREPARSER ./calc input | error ./calc.at:1492: $PREPARSER ./calc input stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' 
(1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr ./calc.at:1492: cat stderr input: | (#) + (#) = 2222 input: ./calc.at:1491: $PREPARSER ./calc input | 1 = 2 = 3 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: 
popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Reading a token Next token is token '\n' 
(1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token 
error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) 
Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr ./calc.at:1491: cat stderr input: | | +1 ./calc.at:1492: $PREPARSER ./calc input input: | (1 + #) = 1111 ./calc.at:1491: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by 
rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token 
'\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 
(line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: cat stderr ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: $PREPARSER ./calc /dev/null ./calc.at:1491: cat stderr stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: stderr: | (# + 1) = 1111 ./calc.at:1491: $PREPARSER ./calc input Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) stderr: ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1492: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at 
end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1492: $PREPARSER ./calc input ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1491: cat stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = 
nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) input: | (1 + # + 1) = 1111 ./calc.at:1491: $PREPARSER ./calc input ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" 
(1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 
Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.15-18: 
1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr ./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 input: | (!!) + (1 2) = 1 ./calc.at:1492: $PREPARSER ./calc input ./calc.at:1491: cat stderr stderr: input: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 
-> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) | (1 + 1) / (1 - 1) ./calc.at:1491: $PREPARSER ./calc input ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 
-> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = 
token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1491: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by 
rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1492: cat stderr
input:
| (- *) + (1 2) = 1
./calc.at:1492: $PREPARSER ./calc input
./calc.at:1491: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1491: cat stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" 
(1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
566. calc.at:1491: ok
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" 
(1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1492: cat stderr
input:
| (* *) + (*) + (*)
./calc.at:1492: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1492: cat stderr
input:
| 1 + 2 * 3 + !+ ++
./calc.at:1492: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!'
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' (1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) 569. 
calc.at:1494: testing Calculator C++ %glr-parser %no-lines %locations %header parse.error=verbose %debug api.prefix={calc} %verbose %parse-param {semantic_value *result}{int *count}{int *nerrs} ...
./calc.at:1494: mv calc.y.tmp calc.y
input:
| 1 + 2 * 3 + !- ++
./calc.at:1492: $PREPARSER ./calc input
./calc.at:1494: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y
stderr:
Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!'
(1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' (1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' 
(1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7)
./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77
./calc.at:1492: cat stderr
input:
| (#) + (#) = 2222
./calc.at:1492: $PREPARSER ./calc input
stderr:
Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 =
nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token "number" (1.13-16: 2222) Shifting token "number" (1.13-16: 2222) Entering state 1 Reducing stack 
0 by rule 5 (line 79): $1 = token "number" (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | (1 + #) = 1111 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) 
-> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) 
Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | (# + 1) = 1111 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 0 
by rule 5 (line 79): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token "number" (1.6: 1) Error: discarding token "number" (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token "number" (1.11-14: 1111) Shifting token "number" (1.11-14: 1111) Entering state 1 Reducing stack 
0 by rule 5 (line 79): $1 = token "number" (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Reading a token Next token is token "number" (1.15-18: 1111) Shifting token "number" (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering 
state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Reading a token Next token is token "number" (1.15-18: 1111) Shifting 
token "number" (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1492: $PREPARSER ./calc input ./calc.at:1494: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token 
"number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.16: 1) -> 
$$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr 567. calc.at:1492: ok 570. calc.at:1494: testing Calculator glr2.cc %no-lines %locations %header parse.error=verbose %debug api.prefix={calc} %verbose %parse-param {semantic_value *result}{int *count}{int *nerrs} ... 
./calc.at:1494: mv calc.y.tmp calc.y
./calc.at:1494: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.cc calc.y
./calc.at:1494: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o calc calc.cc calc-lex.cc calc-main.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from calc.hh:59,
                 from calc.cc:80:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:1815:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:1637:58,
    inlined from 'void calc::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const calc::parser::value_type&, const calc::parser::location_type&)' at calc.cc:2633:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:1815:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at calc.cc:1651:66,
    inlined from 'void calc::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, calc::parser::glr_state*, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2087:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:1815:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:1637:58,
    inlined from 'void calc::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2616:58,
    inlined from 'YYRESULTTAG calc::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at calc.cc:2604:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./calc.at:1492: "$PERL" -ne '
  chomp;
  print "$ARGV:$.: {$_}\n"
    if (# No starting/ending empty lines.
        (eof || $. == 1) && /^\s*$/
        # No trailing space.
        || /\s$/
        # No tabs.
|| /\t/ )' calc.cc calc.hh input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' 
(1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token 
Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Reading a token Next token is token '=' (4.6: ) Reducing 
stack 0 by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = 
nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input 
(1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm 
line (7.1-8.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token number (9.14: 4) Shifting token number 
(9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm 
exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp 
(12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a 
token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm 
exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (14.1: ) Entering state 16 Cleanup: popping token end of input (14.1: ) Cleanup: popping nterm input (1.1-14.0: )
./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp
(1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Next token is token 
'=' (2.12: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token 
Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) 
Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) 
$2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is 
token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Reducing 
stack 0 by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Reducing stack 0 
by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 
Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 
by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 
(line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token end of input (14.1: ) Entering state 16 Cleanup: popping token end of input (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) input: | 1 2 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token number (1.3: 2) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token number (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token number (1.3: 2) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | 1//2 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' 
Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | error ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token invalid token (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token invalid token (1.1: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | 1 = 2 = 3 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ 
/\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | | +1 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr ./calc.at:1492: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Reading a token Now at end of input. 
1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Reducing stack 0 
by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp 
(1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by 
rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp 
(1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp 
(1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = 
nterm exp (1.17: 1) Entering state 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = 
nterm exp (1.17: 1) Entering state 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | (* *) + (*) + (*) ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) input: | 1 + 2 * 3 + !- ++ ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' 
(1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' 
(1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | (#) + (#) = 2222 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = 
nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Reducing stack 0 by 
rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | (1 + #) = 1111 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm 
line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is 
token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | (# + 1) = 1111 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number 
(1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11-14: 
1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' 
(1.8: ) Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is 
token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1492: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 
Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) 
Entering state 28 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1492: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1492: cat stderr 568. calc.at:1492: ok 571. calc.at:1504: testing Calculator lalr1.d ... ./calc.at:1504: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y 571. calc.at:1504: skipped (calc.at:1504) 572. calc.at:1509: testing Calculator D ... 
./calc.at:1509: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y 572. calc.at:1509: skipped (calc.at:1509) 573. calc.at:1510: testing Calculator D %locations ... ./calc.at:1510: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y 573. calc.at:1510: skipped (calc.at:1510) 574. calc.at:1512: testing Calculator D parse.error=detailed api.prefix={calc} %verbose ... ./calc.at:1512: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y 574. calc.at:1512: skipped (calc.at:1512) 575. calc.at:1514: testing Calculator D %debug ... ./calc.at:1514: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y 575. calc.at:1514: skipped (calc.at:1514) 576. calc.at:1516: testing Calculator D parse.error=custom ... ./calc.at:1516: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y 576. calc.at:1516: skipped (calc.at:1516) 577. calc.at:1517: testing Calculator D %locations parse.error=custom ... ./calc.at:1517: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y 577. calc.at:1517: skipped (calc.at:1517) 578. calc.at:1518: testing Calculator D %locations parse.error=detailed ... ./calc.at:1518: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y 578. calc.at:1518: skipped (calc.at:1518) stderr: stdout: ./calc.at:1494: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc calc.hh input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token "number" (1.9: 3) Shifting token "number" (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token "number" (1.13: 7) Shifting token "number" (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 
Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token "number" (2.1: 1) Shifting token "number" (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Reading a token Next token is token "number" (2.5: 2) Shifting token "number" (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token "number" (2.10: 3) Shifting token "number" (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' 
(2.14: ) Entering state 2 Reading a token Next token is token "number" (2.15: 5) Shifting token "number" (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token "number" (4.2: 1) Shifting token "number" (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Reading a token Next token is token "number" (4.4: 2) Shifting token "number" (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 
Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token "number" (4.9: 1) Shifting token "number" (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token "number" (5.3: 1) Shifting token "number" (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 
(line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Reading a token Next token is token "number" (5.6: 2) Shifting token "number" (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Reading a token Next token is token "number" (5.10: 1) Shifting token "number" (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 
70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token "number" (7.4: 1) Shifting token "number" (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token "number" (7.9: 1) Shifting token "number" (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Reducing 
stack 0 by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token "number" (9.1: 1) Shifting token "number" (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Reading a token Next token is token "number" (9.5: 2) Shifting token "number" (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Reading a token Next token is token "number" (9.9: 3) Shifting token "number" (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Reading a token Next token is token '-' (9.13: ) 
Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token "number" (9.14: 4) Shifting token "number" (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token "number" (10.1: 1) Shifting token "number" (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token "number" (10.6: 2) Shifting token "number" (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Reading a token Next token is token "number" (10.10: 3) Shifting token "number" (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 
Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Reading a token Next token is token "number" (10.15: 2) Shifting token "number" (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token 
"number" (12.1: 2) Shifting token "number" (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Reading a token Next token is token "number" (12.3: 2) Shifting token "number" (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Reading a token Next token is token "number" (12.5: 3) Shifting token "number" (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Reading a token Next token is token "number" (12.9-11: 256) Shifting token "number" (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) 
Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token "number" (13.2: 2) Shifting token "number" (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Reading a token Next token is token "number" (13.4: 2) Shifting token "number" (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Reading a token Next token is token "number" (13.7: 3) Shifting token "number" (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Reading a token Next token is token "number" (13.11-12: 64) Shifting token "number" (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 
(line 79): $1 = token "number" (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (14.1: ) Entering state 16 Cleanup: popping token "end of input" (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr input: | 1 2 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token "number" (1.3: 2) 1.3: syntax error, unexpected number Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token "number" (1.3: 2) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | 1//2 ./calc.at:1494: $PREPARSER ./calc input 579. calc.at:1519: testing Calculator D %locations parse.error=simple ... 
./calc.at:1519: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '/' (1.2: ) Shifting token '/' (1.2: ) Entering state 22 Reading a token Next token is token '/' (1.3: ) 1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!' Error: popping token '/' (1.2: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '/' (1.3: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | error ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token "invalid token" (1.1: ) 1.1: syntax error, unexpected invalid token Cleanup: discarding lookahead token "invalid token" (1.1: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | 1 = 2 = 3 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse 
Entering state 0 Reading a token Next token is token "number" (1.1: 1) Shifting token "number" (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '=' (1.3: ) Shifting token '=' (1.3: ) Entering state 18 Reading a token Next token is token "number" (1.5: 2) Shifting token "number" (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 27 Reading a token Next token is token '=' (1.7: ) 1.7: syntax error, unexpected '=' Error: popping nterm exp (1.5: 2) Error: popping token '=' (1.3: ) Error: popping nterm exp (1.1: 1) Cleanup: discarding lookahead token '=' (1.7: ) 579. calc.at:1519: ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 skipped (calc.at:1519) ./calc.at:1494: cat stderr input: | | +1 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 
74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr ./calc.at:1494: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token "end of input" (1.1: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr 580. calc.at:1520: testing Calculator D parse.error=detailed %debug %verbose ... 
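Aside (not part of the original log): the step-by-step `Starting parse` / `Entering state N` / `Shifting token` output seen throughout these tests is Bison's run-time parser tracing. As a rough sketch (this is not the test suite's actual `calc.y`), a C parser enables it with `%define parse.trace` at generation time and `yydebug = 1` at run time:

```yacc
%define parse.trace      /* compile the trace machinery into the parser */
%token NUM "number"
%left '+'
%%
exp: "number" | exp '+' exp;
%%
/* yylex() and yyerror() omitted from this fragment; in the driver: */
int main (void)
{
  extern int yydebug;
  yydebug = 1;           /* emit "Entering state N" / "Shifting token" traces */
  return yyparse ();
}
```

The traces go to stderr, which is why each test captures and diffs `stderr` against the expected transcript.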
./calc.at:1520: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering 
state 29 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token "number" (1.7: 1) Shifting token "number" (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Reading a token Next token is token "number" (1.11: 1) Shifting token "number" (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token "number" (1.15: 1) Shifting token "number" (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Reading a token 
Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token "number" (1.33: 1) Shifting token "number" (1.33: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Reading a token Next token is token "number" (1.37: 2) Shifting token "number" (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Reading a token Next token is token "number" (1.46: 1) Shifting token "number" (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 90): $1 = 
nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token "number" (1.9: 1) Shifting token "number" (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token "number" (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token "number" (1.11: 2) Error: discarding token "number" (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 
-> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 580. calc.at:1520: skipped (calc.at:1520) ./calc.at:1494: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token "number" (1.10: 1) Shifting token "number" (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token "number" (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token "number" (1.12: 2) Error: discarding token "number" (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Reading a token Next token is token "number" (1.17: 1) Shifting token "number" (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" 
(1.17: 1) -> $$ = nterm exp (1.17: 1) Entering state 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: )
Entering state 9
Reducing stack 0 by rule 15 (line 106):
   $1 = token '-' (1.2: )
   $2 = token error (1.4: )
Shifting token error (1.2-4: )
Entering state 11
Next token is token '*' (1.4: )
Error: discarding token '*' (1.4: )
Reading a token
Next token is token ')' (1.5: )
Entering state 11
Next token is token ')' (1.5: )
Shifting token ')' (1.5: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.1: )
   $2 = token error (1.2-4: )
   $3 = token ')' (1.5: )
-> $$ = nterm exp (1.1-5: 1111)
Entering state 8
Reading a token
Next token is token '+' (1.7: )
Shifting token '+' (1.7: )
Entering state 20
Reading a token
Next token is token '(' (1.9: )
Shifting token '(' (1.9: )
Entering state 4
Reading a token
Next token is token "number" (1.10: 1)
Shifting token "number" (1.10: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.10: 1)
-> $$ = nterm exp (1.10: 1)
Entering state 12
Reading a token
Next token is token "number" (1.12: 2)
1.12: syntax error, unexpected number
Error: popping nterm exp (1.10: 1)
Shifting token error (1.10-12: )
Entering state 11
Next token is token "number" (1.12: 2)
Error: discarding token "number" (1.12: 2)
Reading a token
Next token is token ')' (1.13: )
Entering state 11
Next token is token ')' (1.13: )
Shifting token ')' (1.13: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.9: )
   $2 = token error (1.10-12: )
   $3 = token ')' (1.13: )
-> $$ = nterm exp (1.9-13: 1111)
Entering state 29
Reading a token
Next token is token '=' (1.15: )
Reducing stack 0 by rule 7 (line 90):
   $1 = nterm exp (1.1-5: 1111)
   $2 = token '+' (1.7: )
   $3 = nterm exp (1.9-13: 1111)
-> $$ = nterm exp (1.1-13: 2222)
Entering state 8
Next token is token '=' (1.15: )
Shifting token '=' (1.15: )
Entering state 18
Reading a token
Next token is token "number" (1.17: 1)
Shifting token "number" (1.17: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.17: 1)
-> $$ = nterm exp (1.17: 1)
Entering state 27
Reading a token
Next token is token '\n' (1.18-2.0: )
Reducing stack 0 by rule 6 (line 80):
   $1 = nterm exp (1.1-13: 2222)
   $2 = token '=' (1.15: )
   $3 = nterm exp (1.17: 1)
1.1-17: error: 2222 != 1
-> $$ = nterm exp (1.1-17: 2222)
Entering state 8
Next token is token '\n' (1.18-2.0: )
Shifting token '\n' (1.18-2.0: )
Entering state 24
Reducing stack 0 by rule 4 (line 75):
   $1 = nterm exp (1.1-17: 2222)
   $2 = token '\n' (1.18-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 69):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 16
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1494: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1494: cat stderr
input:
  | (* *) + (*) + (*)
./calc.at:1494: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token '*' (1.2: )
1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error (1.2: )
Entering state 11
Next token is token '*' (1.2: )
Error: discarding token '*' (1.2: )
Reading a token
Next token is token '*' (1.4: )
Error: discarding token '*' (1.4: )
Reading a token
Next token is token ')' (1.5: )
Entering state 11
Next token is token ')' (1.5: )
Shifting token ')' (1.5: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.1: )
   $2 = token error (1.2-4: )
   $3 = token ')' (1.5: )
-> $$ = nterm exp (1.1-5: 1111)
Entering state 8
Reading a token
Next token is token '+' (1.7: )
Shifting token '+' (1.7: )
Entering state 20
Reading a token
Next token is token '(' (1.9: )
Shifting token '(' (1.9: )
Entering state 4
Reading a token
Next token is token '*' (1.10: )
1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error (1.10: )
Entering state 11
Next token is token '*' (1.10: )
Error: discarding token '*' (1.10: )
Reading a token
Next token is token ')' (1.11: )
Entering state 11
Next token is token ')' (1.11: )
Shifting token ')' (1.11: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.9: )
   $2 = token error (1.10: )
   $3 = token ')' (1.11: )
-> $$ = nterm exp (1.9-11: 1111)
Entering state 29
Reading a token
Next token is token '+' (1.13: )
Reducing stack 0 by rule 7 (line 90):
   $1 = nterm exp (1.1-5: 1111)
   $2 = token '+' (1.7: )
   $3 = nterm exp (1.9-11: 1111)
-> $$ = nterm exp (1.1-11: 2222)
Entering state 8
Next token is token '+' (1.13: )
Shifting token '+' (1.13: )
Entering state 20
Reading a token
Next token is token '(' (1.15: )
Shifting token '(' (1.15: )
Entering state 4
Reading a token
Next token is token '*' (1.16: )
1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error (1.16: )
Entering state 11
Next token is token '*' (1.16: )
Error: discarding token '*' (1.16: )
Reading a token
Next token is token ')' (1.17: )
Entering state 11
Next token is token ')' (1.17: )
Shifting token ')' (1.17: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.15: )
   $2 = token error (1.16: )
   $3 = token ')' (1.17: )
-> $$ = nterm exp (1.15-17: 1111)
Entering state 29
Reading a token
Next token is token '\n' (1.18-2.0: )
Reducing stack 0 by rule 7 (line 90):
   $1 = nterm exp (1.1-11: 2222)
   $2 = token '+' (1.13: )
   $3 = nterm exp (1.15-17: 1111)
-> $$ = nterm exp (1.1-17: 3333)
Entering state 8
Next token is token '\n' (1.18-2.0: )
Shifting token '\n' (1.18-2.0: )
Entering state 24
Reducing stack 0 by rule 4 (line 75):
   $1 = nterm exp (1.1-17: 3333)
   $2 = token '\n' (1.18-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 69):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 16
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token '*' (1.2: )
1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error (1.2: )
Entering state 11
Next token is token '*' (1.2: )
Error: discarding token '*' (1.2: )
Reading a token
Next token is token '*' (1.4: )
Error: discarding token '*' (1.4: )
Reading a token
Next token is token ')' (1.5: )
Entering state 11
Next token is token ')' (1.5: )
Shifting token ')' (1.5: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.1: )
   $2 = token error (1.2-4: )
   $3 = token ')' (1.5: )
-> $$ = nterm exp (1.1-5: 1111)
Entering state 8
Reading a token
Next token is token '+' (1.7: )
Shifting token '+' (1.7: )
Entering state 20
Reading a token
Next token is token '(' (1.9: )
Shifting token '(' (1.9: )
Entering state 4
Reading a token
Next token is token '*' (1.10: )
1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error (1.10: )
Entering state 11
Next token is token '*' (1.10: )
Error: discarding token '*' (1.10: )
Reading a token
Next token is token ')' (1.11: )
Entering state 11
Next token is token ')' (1.11: )
Shifting token ')' (1.11: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.9: )
   $2 = token error (1.10: )
   $3 = token ')' (1.11: )
-> $$ = nterm exp (1.9-11: 1111)
Entering state 29
Reading a token
Next token is token '+' (1.13: )
Reducing stack 0 by rule 7 (line 90):
   $1 = nterm exp (1.1-5: 1111)
   $2 = token '+' (1.7: )
   $3 = nterm exp (1.9-11: 1111)
-> $$ = nterm exp (1.1-11: 2222)
Entering state 8
Next token is token '+' (1.13: )
Shifting token '+' (1.13: )
Entering state 20
Reading a token
Next token is token '(' (1.15: )
Shifting token '(' (1.15: )
Entering state 4
Reading a token
Next token is token '*' (1.16: )
1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!'
Shifting token error (1.16: )
Entering state 11
Next token is token '*' (1.16: )
Error: discarding token '*' (1.16: )
Reading a token
Next token is token ')' (1.17: )
Entering state 11
Next token is token ')' (1.17: )
Shifting token ')' (1.17: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.15: )
   $2 = token error (1.16: )
   $3 = token ')' (1.17: )
-> $$ = nterm exp (1.15-17: 1111)
Entering state 29
Reading a token
Next token is token '\n' (1.18-2.0: )
Reducing stack 0 by rule 7 (line 90):
   $1 = nterm exp (1.1-11: 2222)
   $2 = token '+' (1.13: )
   $3 = nterm exp (1.15-17: 1111)
-> $$ = nterm exp (1.1-17: 3333)
Entering state 8
Next token is token '\n' (1.18-2.0: )
Shifting token '\n' (1.18-2.0: )
Entering state 24
Reducing stack 0 by rule 4 (line 75):
   $1 = nterm exp (1.1-17: 3333)
   $2 = token '\n' (1.18-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 69):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 16
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1494: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1494: cat stderr
input:
  | 1 + 2 * 3 + !+ ++
./calc.at:1494: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.3: )
Shifting token '+' (1.3: )
Entering state 20
Reading a token
Next token is token "number" (1.5: 2)
Shifting token "number" (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 29
Reading a token
Next token is token '*' (1.7: )
Shifting token '*' (1.7: )
Entering state 21
Reading a token
Next token is token "number" (1.9: 3)
Shifting token "number" (1.9: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.9: 3)
-> $$ = nterm exp (1.9: 3)
Entering state 30
Reading a token
Next token is token '+' (1.11: )
Reducing stack 0 by rule 9 (line 92):
   $1 = nterm exp (1.5: 2)
   $2 = token '*' (1.7: )
   $3 = nterm exp (1.9: 3)
-> $$ = nterm exp (1.5-9: 6)
Entering state 29
Next token is token '+' (1.11: )
Reducing stack 0 by rule 7 (line 90):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.3: )
   $3 = nterm exp (1.5-9: 6)
-> $$ = nterm exp (1.1-9: 7)
Entering state 8
Next token is token '+' (1.11: )
Shifting token '+' (1.11: )
Entering state 20
Reading a token
Next token is token '!' (1.13: )
Shifting token '!' (1.13: )
Entering state 5
Reading a token
Next token is token '+' (1.14: )
Shifting token '+' (1.14: )
Entering state 14
Reducing stack 0 by rule 17 (line 108):
   $1 = token '!' (1.13: )
   $2 = token '+' (1.14: )
Cleanup: popping token '+' (1.11: )
Cleanup: popping nterm exp (1.1-9: 7)
./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.3: )
Shifting token '+' (1.3: )
Entering state 20
Reading a token
Next token is token "number" (1.5: 2)
Shifting token "number" (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 29
Reading a token
Next token is token '*' (1.7: )
Shifting token '*' (1.7: )
Entering state 21
Reading a token
Next token is token "number" (1.9: 3)
Shifting token "number" (1.9: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.9: 3)
-> $$ = nterm exp (1.9: 3)
Entering state 30
Reading a token
Next token is token '+' (1.11: )
Reducing stack 0 by rule 9 (line 92):
   $1 = nterm exp (1.5: 2)
   $2 = token '*' (1.7: )
   $3 = nterm exp (1.9: 3)
-> $$ = nterm exp (1.5-9: 6)
Entering state 29
Next token is token '+' (1.11: )
Reducing stack 0 by rule 7 (line 90):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.3: )
   $3 = nterm exp (1.5-9: 6)
-> $$ = nterm exp (1.1-9: 7)
Entering state 8
Next token is token '+' (1.11: )
Shifting token '+' (1.11: )
Entering state 20
Reading a token
Next token is token '!' (1.13: )
Shifting token '!' (1.13: )
Entering state 5
Reading a token
Next token is token '+' (1.14: )
Shifting token '+' (1.14: )
Entering state 14
Reducing stack 0 by rule 17 (line 108):
   $1 = token '!' (1.13: )
   $2 = token '+' (1.14: )
Cleanup: popping token '+' (1.11: )
Cleanup: popping nterm exp (1.1-9: 7)
input:
  | 1 + 2 * 3 + !- ++
./calc.at:1494: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.3: )
Shifting token '+' (1.3: )
Entering state 20
Reading a token
Next token is token "number" (1.5: 2)
Shifting token "number" (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 29
Reading a token
Next token is token '*' (1.7: )
Shifting token '*' (1.7: )
Entering state 21
Reading a token
Next token is token "number" (1.9: 3)
Shifting token "number" (1.9: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.9: 3)
-> $$ = nterm exp (1.9: 3)
Entering state 30
Reading a token
Next token is token '+' (1.11: )
Reducing stack 0 by rule 9 (line 92):
   $1 = nterm exp (1.5: 2)
   $2 = token '*' (1.7: )
   $3 = nterm exp (1.9: 3)
-> $$ = nterm exp (1.5-9: 6)
Entering state 29
Next token is token '+' (1.11: )
Reducing stack 0 by rule 7 (line 90):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.3: )
   $3 = nterm exp (1.5-9: 6)
-> $$ = nterm exp (1.1-9: 7)
Entering state 8
Next token is token '+' (1.11: )
Shifting token '+' (1.11: )
Entering state 20
Reading a token
Next token is token '!' (1.13: )
Shifting token '!' (1.13: )
Entering state 5
Reading a token
Next token is token '-' (1.14: )
Shifting token '-' (1.14: )
Entering state 13
Reducing stack 0 by rule 18 (line 109):
   $1 = token '!' (1.13: )
   $2 = token '-' (1.14: )
Cleanup: popping token '+' (1.11: )
Cleanup: popping nterm exp (1.1-9: 7)
581. calc.at:1521: testing Calculator D parse.error=detailed %debug api.symbol.prefix={SYMB_} api.token.prefix={TOK_} %verbose ...
./calc.at:1521: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y
./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token "number" (1.1: 1)
Shifting token "number" (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '+' (1.3: )
Shifting token '+' (1.3: )
Entering state 20
Reading a token
Next token is token "number" (1.5: 2)
Shifting token "number" (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 29
Reading a token
Next token is token '*' (1.7: )
Shifting token '*' (1.7: )
Entering state 21
Reading a token
Next token is token "number" (1.9: 3)
Shifting token "number" (1.9: 3)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.9: 3)
-> $$ = nterm exp (1.9: 3)
Entering state 30
Reading a token
Next token is token '+' (1.11: )
Reducing stack 0 by rule 9 (line 92):
   $1 = nterm exp (1.5: 2)
   $2 = token '*' (1.7: )
   $3 = nterm exp (1.9: 3)
-> $$ = nterm exp (1.5-9: 6)
Entering state 29
Next token is token '+' (1.11: )
Reducing stack 0 by rule 7 (line 90):
   $1 = nterm exp (1.1: 1)
   $2 = token '+' (1.3: )
   $3 = nterm exp (1.5-9: 6)
-> $$ = nterm exp (1.1-9: 7)
Entering state 8
Next token is token '+' (1.11: )
Shifting token '+' (1.11: )
Entering state 20
Reading a token
Next token is token '!' (1.13: )
Shifting token '!' (1.13: )
Entering state 5
Reading a token
Next token is token '-' (1.14: )
Shifting token '-' (1.14: )
Entering state 13
Reducing stack 0 by rule 18 (line 109):
   $1 = token '!' (1.13: )
   $2 = token '-' (1.14: )
Cleanup: popping token '+' (1.11: )
Cleanup: popping nterm exp (1.1-9: 7)
./calc.at:1494: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1494: cat stderr
input:
  | (#) + (#) = 2222
./calc.at:1494: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
1.2: syntax error: invalid character: '#'
Next token is token error (1.2: )
Shifting token error (1.2: )
Entering state 11
Next token is token error (1.2: )
Error: discarding token error (1.2: )
Reading a token
Next token is token ')' (1.3: )
Entering state 11
Next token is token ')' (1.3: )
Shifting token ')' (1.3: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.1: )
   $2 = token error (1.2: )
   $3 = token ')' (1.3: )
-> $$ = nterm exp (1.1-3: 1111)
Entering state 8
Reading a token
Next token is token '+' (1.5: )
Shifting token '+' (1.5: )
Entering state 20
Reading a token
Next token is token '(' (1.7: )
Shifting token '(' (1.7: )
Entering state 4
Reading a token
1.8: syntax error: invalid character: '#'
Next token is token error (1.8: )
Shifting token error (1.8: )
Entering state 11
Next token is token error (1.8: )
Error: discarding token error (1.8: )
Reading a token
Next token is token ')' (1.9: )
Entering state 11
Next token is token ')' (1.9: )
Shifting token ')' (1.9: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.7: )
   $2 = token error (1.8: )
   $3 = token ')' (1.9: )
-> $$ = nterm exp (1.7-9: 1111)
Entering state 29
Reading a token
Next token is token '=' (1.11: )
Reducing stack 0 by rule 7 (line 90):
   $1 = nterm exp (1.1-3: 1111)
   $2 = token '+' (1.5: )
   $3 = nterm exp (1.7-9: 1111)
-> $$ = nterm exp (1.1-9: 2222)
Entering state 8
Next token is token '=' (1.11: )
Shifting token '=' (1.11: )
Entering state 18
Reading a token
Next token is token "number" (1.13-16: 2222)
Shifting token "number" (1.13-16: 2222)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.13-16: 2222)
-> $$ = nterm exp (1.13-16: 2222)
Entering state 27
Reading a token
Next token is token '\n' (1.17-2.0: )
Reducing stack 0 by rule 6 (line 80):
   $1 = nterm exp (1.1-9: 2222)
   $2 = token '=' (1.11: )
   $3 = nterm exp (1.13-16: 2222)
-> $$ = nterm exp (1.1-16: 2222)
Entering state 8
Next token is token '\n' (1.17-2.0: )
Shifting token '\n' (1.17-2.0: )
Entering state 24
Reducing stack 0 by rule 4 (line 75):
   $1 = nterm exp (1.1-16: 2222)
   $2 = token '\n' (1.17-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 69):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 16
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
1.2: syntax error: invalid character: '#'
Next token is token error (1.2: )
Shifting token error (1.2: )
Entering state 11
Next token is token error (1.2: )
Error: discarding token error (1.2: )
Reading a token
Next token is token ')' (1.3: )
Entering state 11
Next token is token ')' (1.3: )
Shifting token ')' (1.3: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.1: )
   $2 = token error (1.2: )
   $3 = token ')' (1.3: )
-> $$ = nterm exp (1.1-3: 1111)
Entering state 8
Reading a token
Next token is token '+' (1.5: )
Shifting token '+' (1.5: )
Entering state 20
Reading a token
Next token is token '(' (1.7: )
Shifting token '(' (1.7: )
Entering state 4
Reading a token
1.8: syntax error: invalid character: '#'
Next token is token error (1.8: )
Shifting token error (1.8: )
Entering state 11
Next token is token error (1.8: )
Error: discarding token error (1.8: )
Reading a token
Next token is token ')' (1.9: )
Entering state 11
Next token is token ')' (1.9: )
Shifting token ')' (1.9: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.7: )
   $2 = token error (1.8: )
   $3 = token ')' (1.9: )
-> $$ = nterm exp (1.7-9: 1111)
Entering state 29
Reading a token
Next token is token '=' (1.11: )
Reducing stack 0 by rule 7 (line 90):
   $1 = nterm exp (1.1-3: 1111)
   $2 = token '+' (1.5: )
   $3 = nterm exp (1.7-9: 1111)
-> $$ = nterm exp (1.1-9: 2222)
Entering state 8
Next token is token '=' (1.11: )
Shifting token '=' (1.11: )
Entering state 18
Reading a token
Next token is token "number" (1.13-16: 2222)
Shifting token "number" (1.13-16: 2222)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.13-16: 2222)
-> $$ = nterm exp (1.13-16: 2222)
Entering state 27
Reading a token
Next token is token '\n' (1.17-2.0: )
Reducing stack 0 by rule 6 (line 80):
   $1 = nterm exp (1.1-9: 2222)
   $2 = token '=' (1.11: )
   $3 = nterm exp (1.13-16: 2222)
-> $$ = nterm exp (1.1-16: 2222)
Entering state 8
Next token is token '\n' (1.17-2.0: )
Shifting token '\n' (1.17-2.0: )
Entering state 24
Reducing stack 0 by rule 4 (line 75):
   $1 = nterm exp (1.1-16: 2222)
   $2 = token '\n' (1.17-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 69):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 16
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1494: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
581. calc.at:1521: skipped (calc.at:1521)
./calc.at:1494: cat stderr
input:
  | (1 + #) = 1111
./calc.at:1494: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.2: 1)
-> $$ = nterm exp (1.2: 1)
Entering state 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 20
Reading a token
1.6: syntax error: invalid character: '#'
Next token is token error (1.6: )
Error: popping token '+' (1.4: )
Error: popping nterm exp (1.2: 1)
Shifting token error (1.2-6: )
Entering state 11
Next token is token error (1.6: )
Error: discarding token error (1.6: )
Reading a token
Next token is token ')' (1.7: )
Entering state 11
Next token is token ')' (1.7: )
Shifting token ')' (1.7: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.1: )
   $2 = token error (1.2-6: )
   $3 = token ')' (1.7: )
-> $$ = nterm exp (1.1-7: 1111)
Entering state 8
Reading a token
Next token is token '=' (1.9: )
Shifting token '=' (1.9: )
Entering state 18
Reading a token
Next token is token "number" (1.11-14: 1111)
Shifting token "number" (1.11-14: 1111)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.11-14: 1111)
-> $$ = nterm exp (1.11-14: 1111)
Entering state 27
Reading a token
Next token is token '\n' (1.15-2.0: )
Reducing stack 0 by rule 6 (line 80):
   $1 = nterm exp (1.1-7: 1111)
   $2 = token '=' (1.9: )
   $3 = nterm exp (1.11-14: 1111)
-> $$ = nterm exp (1.1-14: 1111)
Entering state 8
Next token is token '\n' (1.15-2.0: )
Shifting token '\n' (1.15-2.0: )
Entering state 24
Reducing stack 0 by rule 4 (line 75):
   $1 = nterm exp (1.1-14: 1111)
   $2 = token '\n' (1.15-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 69):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 16
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.2: 1)
-> $$ = nterm exp (1.2: 1)
Entering state 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 20
Reading a token
1.6: syntax error: invalid character: '#'
Next token is token error (1.6: )
Error: popping token '+' (1.4: )
Error: popping nterm exp (1.2: 1)
Shifting token error (1.2-6: )
Entering state 11
Next token is token error (1.6: )
Error: discarding token error (1.6: )
Reading a token
Next token is token ')' (1.7: )
Entering state 11
Next token is token ')' (1.7: )
Shifting token ')' (1.7: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.1: )
   $2 = token error (1.2-6: )
   $3 = token ')' (1.7: )
-> $$ = nterm exp (1.1-7: 1111)
Entering state 8
Reading a token
Next token is token '=' (1.9: )
Shifting token '=' (1.9: )
Entering state 18
Reading a token
Next token is token "number" (1.11-14: 1111)
Shifting token "number" (1.11-14: 1111)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.11-14: 1111)
-> $$ = nterm exp (1.11-14: 1111)
Entering state 27
Reading a token
Next token is token '\n' (1.15-2.0: )
Reducing stack 0 by rule 6 (line 80):
   $1 = nterm exp (1.1-7: 1111)
   $2 = token '=' (1.9: )
   $3 = nterm exp (1.11-14: 1111)
-> $$ = nterm exp (1.1-14: 1111)
Entering state 8
Next token is token '\n' (1.15-2.0: )
Shifting token '\n' (1.15-2.0: )
Entering state 24
Reducing stack 0 by rule 4 (line 75):
   $1 = nterm exp (1.1-14: 1111)
   $2 = token '\n' (1.15-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 69):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 16
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1494: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1494: cat stderr
input:
  | (# + 1) = 1111
./calc.at:1494: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
1.2: syntax error: invalid character: '#'
Next token is token error (1.2: )
Shifting token error (1.2: )
Entering state 11
Next token is token error (1.2: )
Error: discarding token error (1.2: )
Reading a token
Next token is token '+' (1.4: )
Error: discarding token '+' (1.4: )
Reading a token
Next token is token "number" (1.6: 1)
Error: discarding token "number" (1.6: 1)
Reading a token
Next token is token ')' (1.7: )
Entering state 11
Next token is token ')' (1.7: )
Shifting token ')' (1.7: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.1: )
   $2 = token error (1.2-6: )
   $3 = token ')' (1.7: )
-> $$ = nterm exp (1.1-7: 1111)
Entering state 8
Reading a token
Next token is token '=' (1.9: )
Shifting token '=' (1.9: )
Entering state 18
Reading a token
Next token is token "number" (1.11-14: 1111)
Shifting token "number" (1.11-14: 1111)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.11-14: 1111)
-> $$ = nterm exp (1.11-14: 1111)
Entering state 27
Reading a token
Next token is token '\n' (1.15-2.0: )
Reducing stack 0 by rule 6 (line 80):
   $1 = nterm exp (1.1-7: 1111)
   $2 = token '=' (1.9: )
   $3 = nterm exp (1.11-14: 1111)
-> $$ = nterm exp (1.1-14: 1111)
Entering state 8
Next token is token '\n' (1.15-2.0: )
Shifting token '\n' (1.15-2.0: )
Entering state 24
Reducing stack 0 by rule 4 (line 75):
   $1 = nterm exp (1.1-14: 1111)
   $2 = token '\n' (1.15-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 69):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 16
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
1.2: syntax error: invalid character: '#'
Next token is token error (1.2: )
Shifting token error (1.2: )
Entering state 11
Next token is token error (1.2: )
Error: discarding token error (1.2: )
Reading a token
Next token is token '+' (1.4: )
Error: discarding token '+' (1.4: )
Reading a token
Next token is token "number" (1.6: 1)
Error: discarding token "number" (1.6: 1)
Reading a token
Next token is token ')' (1.7: )
Entering state 11
Next token is token ')' (1.7: )
Shifting token ')' (1.7: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.1: )
   $2 = token error (1.2-6: )
   $3 = token ')' (1.7: )
-> $$ = nterm exp (1.1-7: 1111)
Entering state 8
Reading a token
Next token is token '=' (1.9: )
Shifting token '=' (1.9: )
Entering state 18
Reading a token
Next token is token "number" (1.11-14: 1111)
Shifting token "number" (1.11-14: 1111)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.11-14: 1111)
-> $$ = nterm exp (1.11-14: 1111)
Entering state 27
Reading a token
Next token is token '\n' (1.15-2.0: )
Reducing stack 0 by rule 6 (line 80):
   $1 = nterm exp (1.1-7: 1111)
   $2 = token '=' (1.9: )
   $3 = nterm exp (1.11-14: 1111)
-> $$ = nterm exp (1.1-14: 1111)
Entering state 8
Next token is token '\n' (1.15-2.0: )
Shifting token '\n' (1.15-2.0: )
Entering state 24
Reducing stack 0 by rule 4 (line 75):
   $1 = nterm exp (1.1-14: 1111)
   $2 = token '\n' (1.15-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 69):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 16
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1494: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1494: cat stderr
input:
  | (1 + # + 1) = 1111
./calc.at:1494: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token '(' (1.1: )
Shifting token '(' (1.1: )
Entering state 4
Reading a token
Next token is token "number" (1.2: 1)
Shifting token "number" (1.2: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.2: 1)
-> $$ = nterm exp (1.2: 1)
Entering state 12
Reading a token
Next token is token '+' (1.4: )
Shifting token '+' (1.4: )
Entering state 20
Reading a token
1.6: syntax error: invalid character: '#'
Next token is token error (1.6: )
Error: popping token '+' (1.4: )
Error: popping nterm exp (1.2: 1)
Shifting token error (1.2-6: )
Entering state 11
Next token is token error (1.6: )
Error: discarding token error (1.6: )
Reading a token
Next token is token '+' (1.8: )
Error: discarding token '+' (1.8: )
Reading a token
Next token is token "number" (1.10: 1)
Error: discarding token "number" (1.10: 1)
Reading a token
Next token is token ')' (1.11: )
Entering state 11
Next token is token ')' (1.11: )
Shifting token ')' (1.11: )
Entering state 25
Reducing stack 0 by rule 14 (line 105):
   $1 = token '(' (1.1: )
   $2 = token error (1.2-10: )
   $3 = token ')' (1.11: )
-> $$ = nterm exp (1.1-11: 1111)
Entering state 8
Reading a token
Next token is token '=' (1.13: )
Shifting token '=' (1.13: )
Entering state 18
Reading a token
Next token is token "number" (1.15-18: 1111)
Shifting token "number" (1.15-18: 1111)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token "number" (1.15-18: 1111)
-> $$ = nterm exp (1.15-18: 1111)
Entering state 27
Reading a token
Next token is token '\n' (1.19-2.0: )
Reducing stack 0 by rule 6 (line 80):
   $1 = nterm exp (1.1-11: 1111)
   $2 = token '=' (1.13: )
   $3 = nterm exp (1.15-18: 1111)
-> $$ = nterm exp (1.1-18: 1111)
Entering state 8
Next token is token '\n' (1.19-2.0: )
Shifting token '\n' (1.19-2.0: )
Entering state 24
Reducing stack 0 by rule 4 (line 75):
   $1 = nterm exp (1.1-18: 1111)
   $2 = token '\n' (1.19-2.0: )
-> $$ = nterm line (1.1-2.0: )
Entering state 7
Reducing stack 0 by rule 1 (line 69):
   $1 = nterm line (1.1-2.0: )
-> $$ = nterm input (1.1-2.0: )
Entering state 6
Reading a token
Now at end of input.
Shifting token "end of input" (2.1: )
Entering state 16
Cleanup: popping token "end of input" (2.1: )
Cleanup: popping nterm input (1.1-2.0: )
./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
582. calc.at:1523: testing Calculator D %locations parse.lac=full parse.error=detailed ...
./calc.at:1523: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token "number" (1.10: 1) Error: discarding token "number" (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Reading a token Next token is token "number" (1.15-18: 1111) Shifting token "number" (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' 
(1.19-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is 
token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token "number" (1.2: 1) Shifting token "number" (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token "number" (1.6: 1) Shifting token "number" (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token "number" (1.12: 1) Shifting token "number" (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Reading a token Next token is token "number" (1.16: 1) Shifting token "number" (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token "number" (1.16: 1) -> 
$$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token "end of input" (2.1: ) Entering state 16 Cleanup: popping token "end of input" (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr 569. calc.at:1494: ok 582. calc.at:1523: skipped (calc.at:1523) 583. calc.at:1524: testing Calculator D %locations parse.lac=full parse.error=custom ... ./calc.at:1524: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y 584. 
calc.at:1525: testing Calculator D %locations parse.lac=full parse.error=detailed parse.trace ... ./calc.at:1525: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y 583. calc.at:1524: skipped (calc.at:1524) 584. calc.at:1525: skipped (calc.at:1525) 585. calc.at:1530: testing Calculator D api.token.constructor %locations parse.error=custom api.value.type=union ... ./calc.at:1530: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y 586. calc.at:1531: testing Calculator D api.token.constructor %locations parse.error=detailed ... ./calc.at:1531: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y 585. calc.at:1530: skipped (calc.at:1530) 586. calc.at:1531: skipped (calc.at:1531) 587. calc.at:1532: testing Calculator D api.push-pull=both ... ./calc.at:1532: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y 588. calc.at:1533: testing Calculator D parse.trace parse.error=custom %locations api.push-pull=both parse.lac=full ... ./calc.at:1533: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o calc.d calc.y 587. calc.at:1532: skipped (calc.at:1532) 588. calc.at:1533: skipped (calc.at:1533) 589. calc.at:1544: testing Calculator Java ... ./calc.at:1544: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 590. calc.at:1545: testing Calculator Java parse.error=custom ... ./calc.at:1545: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 589. 
calc.at:1544: skipped (calc.at:1544) 590. calc.at:1545: skipped (calc.at:1545) 591. calc.at:1546: testing Calculator Java parse.error=detailed ... ./calc.at:1546: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 592. calc.at:1547: testing Calculator Java parse.error=verbose ... ./calc.at:1547: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 591. calc.at:1546: skipped (calc.at:1546) 592. calc.at:1547: skipped (calc.at:1547) 593. calc.at:1548: testing Calculator Java %locations parse.error=custom ... ./calc.at:1548: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 594. calc.at:1549: testing Calculator Java %locations parse.error=detailed ... ./calc.at:1549: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 593. calc.at:1548: skipped (calc.at:1548) 594. calc.at:1549: skipped (calc.at:1549) 595. calc.at:1550: testing Calculator Java %locations parse.error=verbose ... ./calc.at:1550: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 596. calc.at:1551: testing Calculator Java parse.trace parse.error=verbose ... ./calc.at:1551: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 595. calc.at:1550: skipped (calc.at:1550) 596. calc.at:1551: skipped (calc.at:1551) 597. calc.at:1552: testing Calculator Java parse.trace parse.error=verbose %locations %lex-param {InputStream is} ... 
./calc.at:1552: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 598. calc.at:1554: testing Calculator Java api.push-pull=both ... ./calc.at:1554: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 597. calc.at:1552: skipped (calc.at:1552) 598. calc.at:1554: skipped (calc.at:1554) 599. calc.at:1555: testing Calculator Java api.push-pull=both parse.error=detailed %locations ... ./calc.at:1555: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 600. calc.at:1556: testing Calculator Java parse.trace parse.error=custom %locations %lex-param {InputStream is} api.push-pull=both ... ./calc.at:1556: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 599. calc.at:1555: skipped (calc.at:1555) 600. calc.at:1556: skipped (calc.at:1556) 601. calc.at:1557: testing Calculator Java parse.trace parse.error=verbose %locations %lex-param {InputStream is} api.push-pull=both ... ./calc.at:1557: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 602. calc.at:1560: testing Calculator Java parse.trace parse.error=custom %locations parse.lac=full ... ./calc.at:1560: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 601. calc.at:1557: skipped (calc.at:1557) 602. calc.at:1560: skipped (calc.at:1560) 603. calc.at:1561: testing Calculator Java parse.trace parse.error=custom %locations api.push-pull=both parse.lac=full ... 
./calc.at:1561: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated -o Calc.java Calc.y 604. torture.at:132: testing Big triangle ... ./torture.at:138: "$PERL" -w ./gengram.pl 200 || exit 77 603. calc.at:1561: skipped (calc.at:1561) 605. torture.at:216: testing Big horizontal ... ./torture.at:230: "$PERL" -w ./gengram.pl 1000 || exit 77 stdout: %code top { /* -*- c -*- */ /* Adjust to the compiler. We used to do it here, but each time we add a new line, we have to adjust all the line numbers in error messages. It's simpler to use a constant include to a varying file. */ #include } %define parse.error verbose %debug %{ #include #include #define MAX 1000 static int yylex (void); #include /* !POSIX */ static void yyerror (const char *msg); %} %token t1 1 "1" t2 2 "2" t3 3 "3" t4 4 "4" t5 5 "5" t6 6 "6" t7 7 "7" t8 8 "8" t9 9 "9" t10 10 "10" t11 11 "11" t12 12 "12" t13 13 "13" t14 14 "14" t15 15 "15" t16 16 "16" t17 17 "17" t18 18 "18" t19 19 "19" t20 20 "20" t21 21 "21" t22 22 "22" t23 23 "23" t24 24 "24" t25 25 "25" t26 26 "26" t27 27 "27" t28 28 "28" t29 29 "29" t30 30 "30" t31 31 "31" t32 32 "32" t33 33 "33" t34 34 "34" t35 35 "35" t36 36 "36" t37 37 "37" t38 38 "38" t39 39 "39" t40 40 "40" t41 41 "41" t42 42 "42" t43 43 "43" t44 44 "44" t45 45 "45" t46 46 "46" t47 47 "47" t48 48 "48" t49 49 "49" t50 50 "50" t51 51 "51" t52 52 "52" t53 53 "53" t54 54 "54" t55 55 "55" t56 56 "56" t57 57 "57" t58 58 "58" t59 59 "59" t60 60 "60" t61 61 "61" t62 62 "62" t63 63 "63" t64 64 "64" t65 65 "65" t66 66 "66" t67 67 "67" t68 68 "68" t69 69 "69" t70 70 "70" t71 71 "71" t72 72 "72" t73 73 "73" t74 74 "74" t75 75 "75" t76 76 "76" t77 77 "77" t78 78 "78" t79 79 "79" t80 80 "80" t81 81 "81" t82 82 "82" t83 83 "83" t84 84 "84" t85 85 "85" t86 86 "86" t87 87 "87" t88 88 "88" t89 89 "89" t90 90 "90" t91 91 "91" t92 92 "92" t93 93 "93" t94 94 "94" t95 95 "95" t96 96 "96" t97 97 "97" t98 98 "98" t99 99 "99" t100 100 
"100" t101 101 "101" t102 102 "102" t103 103 "103" t104 104 "104" t105 105 "105" t106 106 "106" t107 107 "107" t108 108 "108" t109 109 "109" t110 110 "110" t111 111 "111" t112 112 "112" t113 113 "113" t114 114 "114" t115 115 "115" t116 116 "116" t117 117 "117" t118 118 "118" t119 119 "119" t120 120 "120" t121 121 "121" t122 122 "122" t123 123 "123" t124 124 "124" t125 125 "125" t126 126 "126" t127 127 "127" t128 128 "128" t129 129 "129" t130 130 "130" t131 131 "131" t132 132 "132" t133 133 "133" t134 134 "134" t135 135 "135" t136 136 "136" t137 137 "137" t138 138 "138" t139 139 "139" t140 140 "140" t141 141 "141" t142 142 "142" t143 143 "143" t144 144 "144" t145 145 "145" t146 146 "146" t147 147 "147" t148 148 "148" t149 149 "149" t150 150 "150" t151 151 "151" t152 152 "152" t153 153 "153" t154 154 "154" t155 155 "155" t156 156 "156" t157 157 "157" t158 158 "158" t159 159 "159" t160 160 "160" t161 161 "161" t162 162 "162" t163 163 "163" t164 164 "164" t165 165 "165" t166 166 "166" t167 167 "167" t168 168 "168" t169 169 "169" t170 170 "170" t171 171 "171" t172 172 "172" t173 173 "173" t174 174 "174" t175 175 "175" t176 176 "176" t177 177 "177" t178 178 "178" t179 179 "179" t180 180 "180" t181 181 "181" t182 182 "182" t183 183 "183" t184 184 "184" t185 185 "185" t186 186 "186" t187 187 "187" t188 188 "188" t189 189 "189" t190 190 "190" t191 191 "191" t192 192 "192" t193 193 "193" t194 194 "194" t195 195 "195" t196 196 "196" t197 197 "197" t198 198 "198" t199 199 "199" t200 200 "200" t201 201 "201" t202 202 "202" t203 203 "203" t204 204 "204" t205 205 "205" t206 206 "206" t207 207 "207" t208 208 "208" t209 209 "209" t210 210 "210" t211 211 "211" t212 212 "212" t213 213 "213" t214 214 "214" t215 215 "215" t216 216 "216" t217 217 "217" t218 218 "218" t219 219 "219" t220 220 "220" t221 221 "221" t222 222 "222" t223 223 "223" t224 224 "224" t225 225 "225" t226 226 "226" t227 227 "227" t228 228 "228" t229 229 "229" t230 230 "230" t231 231 "231" t232 232 "232" t233 233 
"233" t234 234 "234" t235 235 "235" t236 236 "236" t237 237 "237" t238 238 "238" t239 239 "239" t240 240 "240" t241 241 "241" t242 242 "242" t243 243 "243" t244 244 "244" t245 245 "245" t246 246 "246" t247 247 "247" t248 248 "248" t249 249 "249" t250 250 "250" t251 251 "251" t252 252 "252" t253 253 "253" t254 254 "254" t255 255 "255" t256 256 "256" t257 257 "257" t258 258 "258" t259 259 "259" t260 260 "260" t261 261 "261" t262 262 "262" t263 263 "263" t264 264 "264" t265 265 "265" t266 266 "266" t267 267 "267" t268 268 "268" t269 269 "269" t270 270 "270" t271 271 "271" t272 272 "272" t273 273 "273" t274 274 "274" t275 275 "275" t276 276 "276" t277 277 "277" t278 278 "278" t279 279 "279" t280 280 "280" t281 281 "281" t282 282 "282" t283 283 "283" t284 284 "284" t285 285 "285" t286 286 "286" t287 287 "287" t288 288 "288" t289 289 "289" t290 290 "290" t291 291 "291" t292 292 "292" t293 293 "293" t294 294 "294" t295 295 "295" t296 296 "296" t297 297 "297" t298 298 "298" t299 299 "299" t300 300 "300" t301 301 "301" t302 302 "302" t303 303 "303" t304 304 "304" t305 305 "305" t306 306 "306" t307 307 "307" t308 308 "308" t309 309 "309" t310 310 "310" t311 311 "311" t312 312 "312" t313 313 "313" t314 314 "314" t315 315 "315" t316 316 "316" t317 317 "317" t318 318 "318" t319 319 "319" t320 320 "320" t321 321 "321" t322 322 "322" t323 323 "323" t324 324 "324" t325 325 "325" t326 326 "326" t327 327 "327" t328 328 "328" t329 329 "329" t330 330 "330" t331 331 "331" t332 332 "332" t333 333 "333" t334 334 "334" t335 335 "335" t336 336 "336" t337 337 "337" t338 338 "338" t339 339 "339" t340 340 "340" t341 341 "341" t342 342 "342" t343 343 "343" t344 344 "344" t345 345 "345" t346 346 "346" t347 347 "347" t348 348 "348" t349 349 "349" t350 350 "350" t351 351 "351" t352 352 "352" t353 353 "353" t354 354 "354" t355 355 "355" t356 356 "356" t357 357 "357" t358 358 "358" t359 359 "359" t360 360 "360" t361 361 "361" t362 362 "362" t363 363 "363" t364 364 "364" t365 365 "365" t366 366 
"366" t367 367 "367" t368 368 "368" t369 369 "369" t370 370 "370" t371 371 "371" t372 372 "372" t373 373 "373" t374 374 "374" t375 375 "375" t376 376 "376" t377 377 "377" t378 378 "378" t379 379 "379" t380 380 "380" t381 381 "381" t382 382 "382" t383 383 "383" t384 384 "384" t385 385 "385" t386 386 "386" t387 387 "387" t388 388 "388" t389 389 "389" t390 390 "390" t391 391 "391" t392 392 "392" t393 393 "393" t394 394 "394" t395 395 "395" t396 396 "396" t397 397 "397" t398 398 "398" t399 399 "399" t400 400 "400" t401 401 "401" t402 402 "402" t403 403 "403" t404 404 "404" t405 405 "405" t406 406 "406" t407 407 "407" t408 408 "408" t409 409 "409" t410 410 "410" t411 411 "411" t412 412 "412" t413 413 "413" t414 414 "414" t415 415 "415" t416 416 "416" t417 417 "417" t418 418 "418" t419 419 "419" t420 420 "420" t421 421 "421" t422 422 "422" t423 423 "423" t424 424 "424" t425 425 "425" t426 426 "426" t427 427 "427" t428 428 "428" t429 429 "429" t430 430 "430" t431 431 "431" t432 432 "432" t433 433 "433" t434 434 "434" t435 435 "435" t436 436 "436" t437 437 "437" t438 438 "438" t439 439 "439" t440 440 "440" t441 441 "441" t442 442 "442" t443 443 "443" t444 444 "444" t445 445 "445" t446 446 "446" t447 447 "447" t448 448 "448" t449 449 "449" t450 450 "450" t451 451 "451" t452 452 "452" t453 453 "453" t454 454 "454" t455 455 "455" t456 456 "456" t457 457 "457" t458 458 "458" t459 459 "459" t460 460 "460" t461 461 "461" t462 462 "462" t463 463 "463" t464 464 "464" t465 465 "465" t466 466 "466" t467 467 "467" t468 468 "468" t469 469 "469" t470 470 "470" t471 471 "471" t472 472 "472" t473 473 "473" t474 474 "474" t475 475 "475" t476 476 "476" t477 477 "477" t478 478 "478" t479 479 "479" t480 480 "480" t481 481 "481" t482 482 "482" t483 483 "483" t484 484 "484" t485 485 "485" t486 486 "486" t487 487 "487" t488 488 "488" t489 489 "489" t490 490 "490" t491 491 "491" t492 492 "492" t493 493 "493" t494 494 "494" t495 495 "495" t496 496 "496" t497 497 "497" t498 498 "498" t499 499 
"499" t500 500 "500" t501 501 "501" t502 502 "502" t503 503 "503" t504 504 "504" t505 505 "505" t506 506 "506" t507 507 "507" t508 508 "508" t509 509 "509" t510 510 "510" t511 511 "511" t512 512 "512" t513 513 "513" t514 514 "514" t515 515 "515" t516 516 "516" t517 517 "517" t518 518 "518" t519 519 "519" t520 520 "520" t521 521 "521" t522 522 "522" t523 523 "523" t524 524 "524" t525 525 "525" t526 526 "526" t527 527 "527" t528 528 "528" t529 529 "529" t530 530 "530" t531 531 "531" t532 532 "532" t533 533 "533" t534 534 "534" t535 535 "535" t536 536 "536" t537 537 "537" t538 538 "538" t539 539 "539" t540 540 "540" t541 541 "541" t542 542 "542" t543 543 "543" t544 544 "544" t545 545 "545" t546 546 "546" t547 547 "547" t548 548 "548" t549 549 "549" t550 550 "550" t551 551 "551" t552 552 "552" t553 553 "553" t554 554 "554" t555 555 "555" t556 556 "556" t557 557 "557" t558 558 "558" t559 559 "559" t560 560 "560" t561 561 "561" t562 562 "562" t563 563 "563" t564 564 "564" t565 565 "565" t566 566 "566" t567 567 "567" t568 568 "568" t569 569 "569" t570 570 "570" t571 571 "571" t572 572 "572" t573 573 "573" t574 574 "574" t575 575 "575" t576 576 "576" t577 577 "577" t578 578 "578" t579 579 "579" t580 580 "580" t581 581 "581" t582 582 "582" t583 583 "583" t584 584 "584" t585 585 "585" t586 586 "586" t587 587 "587" t588 588 "588" t589 589 "589" t590 590 "590" t591 591 "591" t592 592 "592" t593 593 "593" t594 594 "594" t595 595 "595" t596 596 "596" t597 597 "597" t598 598 "598" t599 599 "599" t600 600 "600" t601 601 "601" t602 602 "602" t603 603 "603" t604 604 "604" t605 605 "605" t606 606 "606" t607 607 "607" t608 608 "608" t609 609 "609" t610 610 "610" t611 611 "611" t612 612 "612" t613 613 "613" t614 614 "614" t615 615 "615" t616 616 "616" t617 617 "617" t618 618 "618" t619 619 "619" t620 620 "620" t621 621 "621" t622 622 "622" t623 623 "623" t624 624 "624" t625 625 "625" t626 626 "626" t627 627 "627" t628 628 "628" t629 629 "629" t630 630 "630" t631 631 "631" t632 632 
"632" t633 633 "633" t634 634 "634" t635 635 "635" t636 636 "636" t637 637 "637" t638 638 "638" t639 639 "639" t640 640 "640" t641 641 "641" t642 642 "642" t643 643 "643" t644 644 "644" t645 645 "645" t646 646 "646" t647 647 "647" t648 648 "648" t649 649 "649" t650 650 "650" t651 651 "651" t652 652 "652" t653 653 "653" t654 654 "654" t655 655 "655" t656 656 "656" t657 657 "657" t658 658 "658" t659 659 "659" t660 660 "660" t661 661 "661" t662 662 "662" t663 663 "663" t664 664 "664" t665 665 "665" t666 666 "666" t667 667 "667" t668 668 "668" t669 669 "669" t670 670 "670" t671 671 "671" t672 672 "672" t673 673 "673" t674 674 "674" t675 675 "675" t676 676 "676" t677 677 "677" t678 678 "678" t679 679 "679" t680 680 "680" t681 681 "681" t682 682 "682" t683 683 "683" t684 684 "684" t685 685 "685" t686 686 "686" t687 687 "687" t688 688 "688" t689 689 "689" t690 690 "690" t691 691 "691" t692 692 "692" t693 693 "693" t694 694 "694" t695 695 "695" t696 696 "696" t697 697 "697" t698 698 "698" t699 699 "699" t700 700 "700" t701 701 "701" t702 702 "702" t703 703 "703" t704 704 "704" t705 705 "705" t706 706 "706" t707 707 "707" t708 708 "708" t709 709 "709" t710 710 "710" t711 711 "711" t712 712 "712" t713 713 "713" t714 714 "714" t715 715 "715" t716 716 "716" t717 717 "717" t718 718 "718" t719 719 "719" t720 720 "720" t721 721 "721" t722 722 "722" t723 723 "723" t724 724 "724" t725 725 "725" t726 726 "726" t727 727 "727" t728 728 "728" t729 729 "729" t730 730 "730" t731 731 "731" t732 732 "732" t733 733 "733" t734 734 "734" t735 735 "735" t736 736 "736" t737 737 "737" t738 738 "738" t739 739 "739" t740 740 "740" t741 741 "741" t742 742 "742" t743 743 "743" t744 744 "744" t745 745 "745" t746 746 "746" t747 747 "747" t748 748 "748" t749 749 "749" t750 750 "750" t751 751 "751" t752 752 "752" t753 753 "753" t754 754 "754" t755 755 "755" t756 756 "756" t757 757 "757" t758 758 "758" t759 759 "759" t760 760 "760" t761 761 "761" t762 762 "762" t763 763 "763" t764 764 "764" t765 765 
"765" t766 766 "766" t767 767 "767" t768 768 "768" t769 769 "769" t770 770 "770" t771 771 "771" t772 772 "772" t773 773 "773" t774 774 "774" t775 775 "775" t776 776 "776" t777 777 "777" t778 778 "778" t779 779 "779" t780 780 "780" t781 781 "781" t782 782 "782" t783 783 "783" t784 784 "784" t785 785 "785" t786 786 "786" t787 787 "787" t788 788 "788" t789 789 "789" t790 790 "790" t791 791 "791" t792 792 "792" t793 793 "793" t794 794 "794" t795 795 "795" t796 796 "796" t797 797 "797" t798 798 "798" t799 799 "799" t800 800 "800" t801 801 "801" t802 802 "802" t803 803 "803" t804 804 "804" t805 805 "805" t806 806 "806" t807 807 "807" t808 808 "808" t809 809 "809" t810 810 "810" t811 811 "811" t812 812 "812" t813 813 "813" t814 814 "814" t815 815 "815" t816 816 "816" t817 817 "817" t818 818 "818" t819 819 "819" t820 820 "820" t821 821 "821" t822 822 "822" t823 823 "823" t824 824 "824" t825 825 "825" t826 826 "826" t827 827 "827" t828 828 "828" t829 829 "829" t830 830 "830" t831 831 "831" t832 832 "832" t833 833 "833" t834 834 "834" t835 835 "835" t836 836 "836" t837 837 "837" t838 838 "838" t839 839 "839" t840 840 "840" t841 841 "841" t842 842 "842" t843 843 "843" t844 844 "844" t845 845 "845" t846 846 "846" t847 847 "847" t848 848 "848" t849 849 "849" t850 850 "850" t851 851 "851" t852 852 "852" t853 853 "853" t854 854 "854" t855 855 "855" t856 856 "856" t857 857 "857" t858 858 "858" t859 859 "859" t860 860 "860" t861 861 "861" t862 862 "862" t863 863 "863" t864 864 "864" t865 865 "865" t866 866 "866" t867 867 "867" t868 868 "868" t869 869 "869" t870 870 "870" t871 871 "871" t872 872 "872" t873 873 "873" t874 874 "874" t875 875 "875" t876 876 "876" t877 877 "877" t878 878 "878" t879 879 "879" t880 880 "880" t881 881 "881" t882 882 "882" t883 883 "883" t884 884 "884" t885 885 "885" t886 886 "886" t887 887 "887" t888 888 "888" t889 889 "889" t890 890 "890" t891 891 "891" t892 892 "892" t893 893 "893" t894 894 "894" t895 895 "895" t896 896 "896" t897 897 "897" t898 898 
"898" t899 899 "899" t900 900 "900" t901 901 "901" t902 902 "902" t903 903 "903" t904 904 "904" t905 905 "905" t906 906 "906" t907 907 "907" t908 908 "908" t909 909 "909" t910 910 "910" t911 911 "911" t912 912 "912" t913 913 "913" t914 914 "914" t915 915 "915" t916 916 "916" t917 917 "917" t918 918 "918" t919 919 "919" t920 920 "920" t921 921 "921" t922 922 "922" t923 923 "923" t924 924 "924" t925 925 "925" t926 926 "926" t927 927 "927" t928 928 "928" t929 929 "929" t930 930 "930" t931 931 "931" t932 932 "932" t933 933 "933" t934 934 "934" t935 935 "935" t936 936 "936" t937 937 "937" t938 938 "938" t939 939 "939" t940 940 "940" t941 941 "941" t942 942 "942" t943 943 "943" t944 944 "944" t945 945 "945" t946 946 "946" t947 947 "947" t948 948 "948" t949 949 "949" t950 950 "950" t951 951 "951" t952 952 "952" t953 953 "953" t954 954 "954" t955 955 "955" t956 956 "956" t957 957 "957" t958 958 "958" t959 959 "959" t960 960 "960" t961 961 "961" t962 962 "962" t963 963 "963" t964 964 "964" t965 965 "965" t966 966 "966" t967 967 "967" t968 968 "968" t969 969 "969" t970 970 "970" t971 971 "971" t972 972 "972" t973 973 "973" t974 974 "974" t975 975 "975" t976 976 "976" t977 977 "977" t978 978 "978" t979 979 "979" t980 980 "980" t981 981 "981" t982 982 "982" t983 983 "983" t984 984 "984" t985 985 "985" t986 986 "986" t987 987 "987" t988 988 "988" t989 989 "989" t990 990 "990" t991 991 "991" t992 992 "992" t993 993 "993" t994 994 "994" t995 995 "995" t996 996 "996" t997 997 "997" t998 998 "998" t999 999 "999" t1000 1000 "1000" %% exp: "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" 
"93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" "188" "189" "190" "191" "192" "193" "194" "195" "196" "197" "198" "199" "200" "201" "202" "203" "204" "205" "206" "207" "208" "209" "210" "211" "212" "213" "214" "215" "216" "217" "218" "219" "220" "221" "222" "223" "224" "225" "226" "227" "228" "229" "230" "231" "232" "233" "234" "235" "236" "237" "238" "239" "240" "241" "242" "243" "244" "245" "246" "247" "248" "249" "250" "251" "252" "253" "254" "255" "256" "257" "258" "259" "260" "261" "262" "263" "264" "265" "266" "267" "268" "269" "270" "271" "272" "273" "274" "275" "276" "277" "278" "279" "280" "281" "282" "283" "284" "285" "286" "287" "288" "289" "290" "291" "292" "293" "294" "295" "296" "297" "298" "299" "300" "301" "302" "303" "304" "305" "306" "307" "308" "309" "310" "311" "312" "313" "314" "315" "316" "317" "318" "319" "320" "321" "322" "323" "324" "325" "326" "327" "328" "329" "330" "331" "332" "333" "334" "335" "336" "337" "338" "339" "340" "341" "342" "343" "344" "345" "346" "347" "348" "349" "350" "351" "352" "353" "354" "355" "356" "357" "358" "359" "360" "361" "362" "363" "364" "365" "366" "367" "368" "369" "370" "371" "372" "373" "374" "375" "376" "377" "378" "379" "380" "381" "382" "383" "384" "385" "386" "387" "388" "389" "390" "391" "392" "393" "394" "395" "396" "397" "398" "399" "400" "401" "402" "403" "404" "405" "406" "407" "408" "409" "410" "411" "412" "413" "414" "415" "416" "417" "418" "419" "420" "421" "422" "423" "424" "425" "426" 
"427" "428" "429" "430" "431" "432" "433" "434" "435" "436" "437" "438" "439" "440" "441" "442" "443" "444" "445" "446" "447" "448" "449" "450" "451" "452" "453" "454" "455" "456" "457" "458" "459" "460" "461" "462" "463" "464" "465" "466" "467" "468" "469" "470" "471" "472" "473" "474" "475" "476" "477" "478" "479" "480" "481" "482" "483" "484" "485" "486" "487" "488" "489" "490" "491" "492" "493" "494" "495" "496" "497" "498" "499" "500" "501" "502" "503" "504" "505" "506" "507" "508" "509" "510" "511" "512" "513" "514" "515" "516" "517" "518" "519" "520" "521" "522" "523" "524" "525" "526" "527" "528" "529" "530" "531" "532" "533" "534" "535" "536" "537" "538" "539" "540" "541" "542" "543" "544" "545" "546" "547" "548" "549" "550" "551" "552" "553" "554" "555" "556" "557" "558" "559" "560" "561" "562" "563" "564" "565" "566" "567" "568" "569" "570" "571" "572" "573" "574" "575" "576" "577" "578" "579" "580" "581" "582" "583" "584" "585" "586" "587" "588" "589" "590" "591" "592" "593" "594" "595" "596" "597" "598" "599" "600" "601" "602" "603" "604" "605" "606" "607" "608" "609" "610" "611" "612" "613" "614" "615" "616" "617" "618" "619" "620" "621" "622" "623" "624" "625" "626" "627" "628" "629" "630" "631" "632" "633" "634" "635" "636" "637" "638" "639" "640" "641" "642" "643" "644" "645" "646" "647" "648" "649" "650" "651" "652" "653" "654" "655" "656" "657" "658" "659" "660" "661" "662" "663" "664" "665" "666" "667" "668" "669" "670" "671" "672" "673" "674" "675" "676" "677" "678" "679" "680" "681" "682" "683" "684" "685" "686" "687" "688" "689" "690" "691" "692" "693" "694" "695" "696" "697" "698" "699" "700" "701" "702" "703" "704" "705" "706" "707" "708" "709" "710" "711" "712" "713" "714" "715" "716" "717" "718" "719" "720" "721" "722" "723" "724" "725" "726" "727" "728" "729" "730" "731" "732" "733" "734" "735" "736" "737" "738" "739" "740" "741" "742" "743" "744" "745" "746" "747" "748" "749" "750" "751" "752" "753" "754" "755" "756" "757" "758" "759" 
"760" "761" "762" "763" "764" "765" "766" "767" "768" "769" "770" "771" "772" "773" "774" "775" "776" "777" "778" "779" "780" "781" "782" "783" "784" "785" "786" "787" "788" "789" "790" "791" "792" "793" "794" "795" "796" "797" "798" "799" "800" "801" "802" "803" "804" "805" "806" "807" "808" "809" "810" "811" "812" "813" "814" "815" "816" "817" "818" "819" "820" "821" "822" "823" "824" "825" "826" "827" "828" "829" "830" "831" "832" "833" "834" "835" "836" "837" "838" "839" "840" "841" "842" "843" "844" "845" "846" "847" "848" "849" "850" "851" "852" "853" "854" "855" "856" "857" "858" "859" "860" "861" "862" "863" "864" "865" "866" "867" "868" "869" "870" "871" "872" "873" "874" "875" "876" "877" "878" "879" "880" "881" "882" "883" "884" "885" "886" "887" "888" "889" "890" "891" "892" "893" "894" "895" "896" "897" "898" "899" "900" "901" "902" "903" "904" "905" "906" "907" "908" "909" "910" "911" "912" "913" "914" "915" "916" "917" "918" "919" "920" "921" "922" "923" "924" "925" "926" "927" "928" "929" "930" "931" "932" "933" "934" "935" "936" "937" "938" "939" "940" "941" "942" "943" "944" "945" "946" "947" "948" "949" "950" "951" "952" "953" "954" "955" "956" "957" "958" "959" "960" "961" "962" "963" "964" "965" "966" "967" "968" "969" "970" "971" "972" "973" "974" "975" "976" "977" "978" "979" "980" "981" "982" "983" "984" "985" "986" "987" "988" "989" "990" "991" "992" "993" "994" "995" "996" "997" "998" "999" "1000" ; %% #include /* A C error reporting function. */ /* !POSIX */ static void yyerror (const char *msg) { fprintf (stderr, "%s\n", msg); } static int yylex (void) { static int counter = 1; if (counter <= MAX) return counter++; assert (counter++ == MAX + 1); return 0; } #include /* getenv. */ #include /* strcmp. 
*/ int main (int argc, char const* argv[]) { (void) argc; (void) argv; return yyparse (); } ./torture.at:236: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y stdout: %code top { /* -*- c -*- */ /* Adjust to the compiler. We used to do it here, but each time we add a new line, we have to adjust all the line numbers in error messages. It's simpler to use a constant include to a varying file. */ #include <testsuite.h> } %define parse.error verbose %debug %{ #include <stdio.h> #include <stdlib.h> #include <assert.h> #define MAX 200 static int yylex (void); #include <stdio.h> /* !POSIX */ static void yyerror (const char *msg); %} %union { int val; }; %token END "end" %type <val> exp input %token t1 1 "1" %token t2 2 "2" %token t3 3 "3" %token t4 4 "4" %token t5 5 "5" %token t6 6 "6" %token t7 7 "7" %token t8 8 "8" %token t9 9 "9" %token t10 10 "10" %token t11 11 "11" %token t12 12 "12" %token t13 13 "13" %token t14 14 "14" %token t15 15 "15" %token t16 16 "16" %token t17 17 "17" %token t18 18 "18" %token t19 19 "19" %token t20 20 "20" %token t21 21 "21" %token t22 22 "22" %token t23 23 "23" %token t24 24 "24" %token t25 25 "25" %token t26 26 "26" %token t27 27 "27" %token t28 28 "28" %token t29 29 "29" %token t30 30 "30" %token t31 31 "31" %token t32 32 "32" %token t33 33 "33" %token t34 34 "34" %token t35 35 "35" %token t36 36 "36" %token t37 37 "37" %token t38 38 "38" %token t39 39 "39" %token t40 40 "40" %token t41 41 "41" %token t42 42 "42" %token t43 43 "43" %token t44 44 "44" %token t45 45 "45" %token t46 46 "46" %token t47 47 "47" %token t48 48 "48" %token t49 49 "49" %token t50 50 "50" %token t51 51 "51" %token t52 52 "52" %token t53 53 "53" %token t54 54 "54" %token t55 55 "55" %token t56 56 "56" %token t57 57 "57" %token t58 58 "58" %token t59 59 "59" %token t60 60 "60" %token t61 61 "61" %token t62 62 "62" %token t63 63 "63" %token t64 64 "64" %token t65 65 "65" %token t66 66 "66" %token t67 67 "67" %token t68 68 "68" %token t69 69 "69" %token
t70 70 "70" %token t71 71 "71" %token t72 72 "72" %token t73 73 "73" %token t74 74 "74" %token t75 75 "75" %token t76 76 "76" %token t77 77 "77" %token t78 78 "78" %token t79 79 "79" %token t80 80 "80" %token t81 81 "81" %token t82 82 "82" %token t83 83 "83" %token t84 84 "84" %token t85 85 "85" %token t86 86 "86" %token t87 87 "87" %token t88 88 "88" %token t89 89 "89" %token t90 90 "90" %token t91 91 "91" %token t92 92 "92" %token t93 93 "93" %token t94 94 "94" %token t95 95 "95" %token t96 96 "96" %token t97 97 "97" %token t98 98 "98" %token t99 99 "99" %token t100 100 "100" %token t101 101 "101" %token t102 102 "102" %token t103 103 "103" %token t104 104 "104" %token t105 105 "105" %token t106 106 "106" %token t107 107 "107" %token t108 108 "108" %token t109 109 "109" %token t110 110 "110" %token t111 111 "111" %token t112 112 "112" %token t113 113 "113" %token t114 114 "114" %token t115 115 "115" %token t116 116 "116" %token t117 117 "117" %token t118 118 "118" %token t119 119 "119" %token t120 120 "120" %token t121 121 "121" %token t122 122 "122" %token t123 123 "123" %token t124 124 "124" %token t125 125 "125" %token t126 126 "126" %token t127 127 "127" %token t128 128 "128" %token t129 129 "129" %token t130 130 "130" %token t131 131 "131" %token t132 132 "132" %token t133 133 "133" %token t134 134 "134" %token t135 135 "135" %token t136 136 "136" %token t137 137 "137" %token t138 138 "138" %token t139 139 "139" %token t140 140 "140" %token t141 141 "141" %token t142 142 "142" %token t143 143 "143" %token t144 144 "144" %token t145 145 "145" %token t146 146 "146" %token t147 147 "147" %token t148 148 "148" %token t149 149 "149" %token t150 150 "150" %token t151 151 "151" %token t152 152 "152" %token t153 153 "153" %token t154 154 "154" %token t155 155 "155" %token t156 156 "156" %token t157 157 "157" %token t158 158 "158" %token t159 159 "159" %token t160 160 "160" %token t161 161 "161" %token t162 162 "162" %token t163 163 "163" %token t164 164 "164" %token 
t165 165 "165" %token t166 166 "166" %token t167 167 "167" %token t168 168 "168" %token t169 169 "169" %token t170 170 "170" %token t171 171 "171" %token t172 172 "172" %token t173 173 "173" %token t174 174 "174" %token t175 175 "175" %token t176 176 "176" %token t177 177 "177" %token t178 178 "178" %token t179 179 "179" %token t180 180 "180" %token t181 181 "181" %token t182 182 "182" %token t183 183 "183" %token t184 184 "184" %token t185 185 "185" %token t186 186 "186" %token t187 187 "187" %token t188 188 "188" %token t189 189 "189" %token t190 190 "190" %token t191 191 "191" %token t192 192 "192" %token t193 193 "193" %token t194 194 "194" %token t195 195 "195" %token t196 196 "196" %token t197 197 "197" %token t198 198 "198" %token t199 199 "199" %token t200 200 "200" %% input: exp { assert ($1 == 0); $$ = $1; } | input exp { assert ($2 == $1 + 1); $$ = $2; } ; exp: END { $$ = 0; } | "1" END { $$ = 1; } | "1" "2" END { $$ = 2; } | "1" "2" "3" END { $$ = 3; } | "1" "2" "3" "4" END { $$ = 4; } | "1" "2" "3" "4" "5" END { $$ = 5; } | "1" "2" "3" "4" "5" "6" END { $$ = 6; } | "1" "2" "3" "4" "5" "6" "7" END { $$ = 7; } | "1" "2" "3" "4" "5" "6" "7" "8" END { $$ = 8; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" END { $$ = 9; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" END { $$ = 10; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" END { $$ = 11; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" END { $$ = 12; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" END { $$ = 13; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" END { $$ = 14; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" END { $$ = 15; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" END { $$ = 16; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" END { $$ = 17; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" END { $$ = 18; } | "1" "2" "3" "4" "5" "6" "7" "8" 
"9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" END { $$ = 19; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" END { $$ = 20; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" END { $$ = 21; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" END { $$ = 22; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" END { $$ = 23; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" END { $$ = 24; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" END { $$ = 25; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" END { $$ = 26; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" END { $$ = 27; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" END { $$ = 28; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" END { $$ = 29; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" END { $$ = 30; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" END { $$ = 31; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" END { $$ = 32; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" 
"24" "25" "26" "27" "28" "29" "30" "31" "32" "33" END { $$ = 33; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" END { $$ = 34; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" END { $$ = 35; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" END { $$ = 36; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" END { $$ = 37; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" END { $$ = 38; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" END { $$ = 39; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" END { $$ = 40; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" END { $$ = 41; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" END { $$ = 42; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" 
"29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" END { $$ = 43; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" END { $$ = 44; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" END { $$ = 45; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" END { $$ = 46; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" END { $$ = 47; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" END { $$ = 48; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" END { $$ = 49; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" END { $$ = 50; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" 
"41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" END { $$ = 51; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" END { $$ = 52; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" END { $$ = 53; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" END { $$ = 54; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" END { $$ = 55; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" END { $$ = 56; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" END { $$ = 57; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" 
"49" "50" "51" "52" "53" "54" "55" "56" "57" "58" END { $$ = 58; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" END { $$ = 59; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" END { $$ = 60; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" END { $$ = 61; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" END { $$ = 62; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" END { $$ = 63; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" END { $$ = 64; } | "1" "2" "3" "4" "5" "6" 
"7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" END { $$ = 65; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" END { $$ = 66; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" END { $$ = 67; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" END { $$ = 68; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" END { $$ = 69; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" 
"62" "63" "64" "65" "66" "67" "68" "69" "70" END { $$ = 70; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" END { $$ = 71; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" END { $$ = 72; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" END { $$ = 73; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" END { $$ = 74; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" END { $$ = 75; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" 
"15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" END { $$ = 76; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" END { $$ = 77; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" END { $$ = 78; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" END { $$ = 79; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" END { $$ = 80; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" 
"15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" END { $$ = 81; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" END { $$ = 82; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" END { $$ = 83; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" END { $$ = 84; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" 
"77" "78" "79" "80" "81" "82" "83" "84" "85" END { $$ = 85; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" END { $$ = 86; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" END { $$ = 87; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" END { $$ = 88; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" END { $$ = 89; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" 
"32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" END { $$ = 90; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" END { $$ = 91; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" END { $$ = 92; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" END { $$ = 93; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" 
"58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" END { $$ = 94; } /* alternatives for 95 through 154 tokens elided; each follows the same generated pattern "1" "2" ... "N" END { $$ = N; } */ | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45"
"46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" END { $$ = 155; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" END { $$ = 156; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" 
"106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" END { $$ = 157; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" END { $$ = 158; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" 
"153" "154" "155" "156" "157" "158" "159" END { $$ = 159; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" END { $$ = 160; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" END { $$ = 161; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" 
"40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" END { $$ = 162; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" END { $$ = 163; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" 
"85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" END { $$ = 164; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" END { $$ = 165; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" 
"121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" END { $$ = 166; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" END { $$ = 167; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" 
"150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" END { $$ = 168; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" END { $$ = 169; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" END { $$ = 170; } | "1" "2" 
"3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" END { $$ = 171; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" END { $$ = 172; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" 
"28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" END { $$ = 173; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" END { $$ = 174; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" 
"46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" END { $$ = 175; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" END { $$ = 176; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" 
"60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" END { $$ = 177; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" END { $$ = 178; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" 
"69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" END { $$ = 179; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" END { $$ = 180; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" 
"73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" END { $$ = 181; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" END { $$ = 182; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" 
"72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" END { $$ = 183; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" END { $$ = 184; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" 
"66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" END { $$ = 185; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" END { $$ = 186; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" 
"56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" END { $$ = 187; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" "188" END { $$ = 188; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" 
"41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" "188" "189" END { $$ = 189; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" "188" "189" "190" END { $$ = 190; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" 
"21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" "188" "189" "190" "191" END { $$ = 191; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" "188" "189" "190" 
"191" "192" END { $$ = 192; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" "188" "189" "190" "191" "192" "193" END { $$ = 193; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" 
"168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" "188" "189" "190" "191" "192" "193" "194" END { $$ = 194; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" "188" "189" "190" "191" "192" "193" "194" "195" END { $$ = 195; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" 
"141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" "188" "189" "190" "191" "192" "193" "194" "195" "196" END { $$ = 196; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" "188" "189" "190" "191" "192" "193" "194" "195" "196" "197" END { $$ = 197; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" 
"110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" "188" "189" "190" "191" "192" "193" "194" "195" "196" "197" "198" END { $$ = 198; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" "70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" "188" "189" "190" "191" "192" "193" "194" "195" "196" "197" "198" "199" END { $$ = 199; } | "1" "2" "3" "4" "5" "6" "7" "8" "9" "10" "11" "12" "13" "14" "15" "16" "17" "18" "19" "20" "21" "22" "23" "24" "25" "26" "27" "28" "29" "30" "31" "32" "33" "34" "35" "36" "37" "38" "39" "40" "41" "42" "43" "44" "45" "46" "47" "48" "49" "50" "51" "52" "53" "54" "55" "56" "57" "58" "59" "60" "61" "62" "63" "64" "65" "66" "67" "68" "69" 
"70" "71" "72" "73" "74" "75" "76" "77" "78" "79" "80" "81" "82" "83" "84" "85" "86" "87" "88" "89" "90" "91" "92" "93" "94" "95" "96" "97" "98" "99" "100" "101" "102" "103" "104" "105" "106" "107" "108" "109" "110" "111" "112" "113" "114" "115" "116" "117" "118" "119" "120" "121" "122" "123" "124" "125" "126" "127" "128" "129" "130" "131" "132" "133" "134" "135" "136" "137" "138" "139" "140" "141" "142" "143" "144" "145" "146" "147" "148" "149" "150" "151" "152" "153" "154" "155" "156" "157" "158" "159" "160" "161" "162" "163" "164" "165" "166" "167" "168" "169" "170" "171" "172" "173" "174" "175" "176" "177" "178" "179" "180" "181" "182" "183" "184" "185" "186" "187" "188" "189" "190" "191" "192" "193" "194" "195" "196" "197" "198" "199" "200" END { $$ = 200; } ; %% /* A C error reporting function. */ /* !POSIX */ static void yyerror (const char *msg) { fprintf (stderr, "%s\n", msg); } static int yylex (void) { static int inner = 1; static int outer = 0; if (outer > MAX) return 0; else if (inner > outer) { inner = 1; ++outer; return END; } return inner++; } #include <stdlib.h> /* getenv. */ #include <string.h> /* strcmp. */ int main (int argc, char const* argv[]) { (void) argc; (void) argv; return yyparse (); } ./torture.at:139: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -v -o input.c input.y stderr: In file included from /usr/include/c++/12/vector:70, from calc.hh:57, from calc.cc:78: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:1811:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:1633:58, inlined from 'void calc::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const calc::parser::value_type&, const calc::parser::location_type&)' at calc.cc:2596:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:1811:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at calc.cc:1647:66, inlined from 'void calc::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, calc::parser::glr_state*, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2083:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at calc.cc:1811:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at calc.cc:1633:58, inlined from 'void calc::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, calc::parser::glr_state*, {anonymous}::rule_num)' at calc.cc:2579:58, inlined from 'YYRESULTTAG calc::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at calc.cc:2567:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./calc.at:1494: "$PERL" -ne ' chomp; print "$ARGV:$.: {$_}\n" if (# No starting/ending empty lines. (eof || $. == 1) && /^\s*$/ # No trailing space. || /\s$/ # No tabs. 
|| /\t/ )' calc.cc calc.hh input: | 1 + 2 * 3 = 7 | 1 + 2 * -3 = -5 | | -1^2 = -1 | (-1)^2 = 1 | | ---1 = -1 | | 1 - 2 - 3 = -4 | 1 - (2 - 3) = 2 | | 2^2^3 = 256 | (2^2)^3 = 64 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' 
(1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Next token is token '=' (2.12: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token 
Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Reading a token Next token is token '=' (4.6: ) Reducing 
stack 0 by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = 
nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) $2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input 
(1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm 
line (7.1-8.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token number (9.14: 4) Shifting token number 
(9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm 
exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp 
(12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a 
token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm 
exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (14.1: ) Entering state 16 Cleanup: popping token end of input (14.1: ) Cleanup: popping nterm input (1.1-14.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp 
(1.1-9: 7) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token number (1.13: 7) Shifting token number (1.13: 7) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.13: 7) -> $$ = nterm exp (1.13: 7) Entering state 27 Reading a token Next token is token '\n' (1.14-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 7) $2 = token '=' (1.11: ) $3 = nterm exp (1.13: 7) -> $$ = nterm exp (1.1-13: 7) Entering state 8 Next token is token '\n' (1.14-2.0: ) Shifting token '\n' (1.14-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-13: 7) $2 = token '\n' (1.14-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token number (2.1: 1) Shifting token number (2.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.1: 1) -> $$ = nterm exp (2.1: 1) Entering state 8 Reading a token Next token is token '+' (2.3: ) Shifting token '+' (2.3: ) Entering state 20 Reading a token Next token is token number (2.5: 2) Shifting token number (2.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.5: 2) -> $$ = nterm exp (2.5: 2) Entering state 29 Reading a token Next token is token '*' (2.7: ) Shifting token '*' (2.7: ) Entering state 21 Reading a token Next token is token '-' (2.9: ) Shifting token '-' (2.9: ) Entering state 2 Reading a token Next token is token number (2.10: 3) Shifting token number (2.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.10: 3) -> $$ = nterm exp (2.10: 3) Entering state 10 Reading a token Next token is token '=' (2.12: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.9: ) $2 = nterm exp (2.10: 3) -> $$ = nterm exp (2.9-10: -3) Entering state 30 Next token is token 
'=' (2.12: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (2.5: 2) $2 = token '*' (2.7: ) $3 = nterm exp (2.9-10: -3) -> $$ = nterm exp (2.5-10: -6) Entering state 29 Next token is token '=' (2.12: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (2.1: 1) $2 = token '+' (2.3: ) $3 = nterm exp (2.5-10: -6) -> $$ = nterm exp (2.1-10: -5) Entering state 8 Next token is token '=' (2.12: ) Shifting token '=' (2.12: ) Entering state 18 Reading a token Next token is token '-' (2.14: ) Shifting token '-' (2.14: ) Entering state 2 Reading a token Next token is token number (2.15: 5) Shifting token number (2.15: 5) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (2.15: 5) -> $$ = nterm exp (2.15: 5) Entering state 10 Reading a token Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (2.14: ) $2 = nterm exp (2.15: 5) -> $$ = nterm exp (2.14-15: -5) Entering state 27 Next token is token '\n' (2.16-3.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (2.1-10: -5) $2 = token '=' (2.12: ) $3 = nterm exp (2.14-15: -5) -> $$ = nterm exp (2.1-15: -5) Entering state 8 Next token is token '\n' (2.16-3.0: ) Shifting token '\n' (2.16-3.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (2.1-15: -5) $2 = token '\n' (2.16-3.0: ) -> $$ = nterm line (2.1-3.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-2.0: ) $2 = nterm line (2.1-3.0: ) -> $$ = nterm input (1.1-3.0: ) Entering state 6 Reading a token Next token is token '\n' (3.1-4.0: ) Shifting token '\n' (3.1-4.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (3.1-4.0: ) -> $$ = nterm line (3.1-4.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-3.0: ) $2 = nterm line (3.1-4.0: ) -> $$ = nterm input (1.1-4.0: ) Entering state 6 Reading a token Next token is token '-' (4.1: ) Shifting token '-' (4.1: ) Entering state 2 Reading a token 
Next token is token number (4.2: 1) Shifting token number (4.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.2: 1) -> $$ = nterm exp (4.2: 1) Entering state 10 Reading a token Next token is token '^' (4.3: ) Shifting token '^' (4.3: ) Entering state 23 Reading a token Next token is token number (4.4: 2) Shifting token number (4.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.4: 2) -> $$ = nterm exp (4.4: 2) Entering state 32 Reading a token Next token is token '=' (4.6: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (4.2: 1) $2 = token '^' (4.3: ) $3 = nterm exp (4.4: 2) -> $$ = nterm exp (4.2-4: 1) Entering state 10 Next token is token '=' (4.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.1: ) $2 = nterm exp (4.2-4: 1) -> $$ = nterm exp (4.1-4: -1) Entering state 8 Next token is token '=' (4.6: ) Shifting token '=' (4.6: ) Entering state 18 Reading a token Next token is token '-' (4.8: ) Shifting token '-' (4.8: ) Entering state 2 Reading a token Next token is token number (4.9: 1) Shifting token number (4.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (4.9: 1) -> $$ = nterm exp (4.9: 1) Entering state 10 Reading a token Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (4.8: ) $2 = nterm exp (4.9: 1) -> $$ = nterm exp (4.8-9: -1) Entering state 27 Next token is token '\n' (4.10-5.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (4.1-4: -1) $2 = token '=' (4.6: ) $3 = nterm exp (4.8-9: -1) -> $$ = nterm exp (4.1-9: -1) Entering state 8 Next token is token '\n' (4.10-5.0: ) Shifting token '\n' (4.10-5.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (4.1-9: -1) $2 = token '\n' (4.10-5.0: ) -> $$ = nterm line (4.1-5.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-4.0: ) $2 = nterm line (4.1-5.0: ) -> $$ = nterm input (1.1-5.0: ) 
Entering state 6 Reading a token Next token is token '(' (5.1: ) Shifting token '(' (5.1: ) Entering state 4 Reading a token Next token is token '-' (5.2: ) Shifting token '-' (5.2: ) Entering state 2 Reading a token Next token is token number (5.3: 1) Shifting token number (5.3: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.3: 1) -> $$ = nterm exp (5.3: 1) Entering state 10 Reading a token Next token is token ')' (5.4: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (5.2: ) $2 = nterm exp (5.3: 1) -> $$ = nterm exp (5.2-3: -1) Entering state 12 Next token is token ')' (5.4: ) Shifting token ')' (5.4: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (5.1: ) $2 = nterm exp (5.2-3: -1) $3 = token ')' (5.4: ) -> $$ = nterm exp (5.1-4: -1) Entering state 8 Reading a token Next token is token '^' (5.5: ) Shifting token '^' (5.5: ) Entering state 23 Reading a token Next token is token number (5.6: 2) Shifting token number (5.6: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.6: 2) -> $$ = nterm exp (5.6: 2) Entering state 32 Reading a token Next token is token '=' (5.8: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (5.1-4: -1) $2 = token '^' (5.5: ) $3 = nterm exp (5.6: 2) -> $$ = nterm exp (5.1-6: 1) Entering state 8 Next token is token '=' (5.8: ) Shifting token '=' (5.8: ) Entering state 18 Reading a token Next token is token number (5.10: 1) Shifting token number (5.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (5.10: 1) -> $$ = nterm exp (5.10: 1) Entering state 27 Reading a token Next token is token '\n' (5.11-6.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (5.1-6: 1) $2 = token '=' (5.8: ) $3 = nterm exp (5.10: 1) -> $$ = nterm exp (5.1-10: 1) Entering state 8 Next token is token '\n' (5.11-6.0: ) Shifting token '\n' (5.11-6.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (5.1-10: 1) 
$2 = token '\n' (5.11-6.0: ) -> $$ = nterm line (5.1-6.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-5.0: ) $2 = nterm line (5.1-6.0: ) -> $$ = nterm input (1.1-6.0: ) Entering state 6 Reading a token Next token is token '\n' (6.1-7.0: ) Shifting token '\n' (6.1-7.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (6.1-7.0: ) -> $$ = nterm line (6.1-7.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-6.0: ) $2 = nterm line (6.1-7.0: ) -> $$ = nterm input (1.1-7.0: ) Entering state 6 Reading a token Next token is token '-' (7.1: ) Shifting token '-' (7.1: ) Entering state 2 Reading a token Next token is token '-' (7.2: ) Shifting token '-' (7.2: ) Entering state 2 Reading a token Next token is token '-' (7.3: ) Shifting token '-' (7.3: ) Entering state 2 Reading a token Next token is token number (7.4: 1) Shifting token number (7.4: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7.4: 1) -> $$ = nterm exp (7.4: 1) Entering state 10 Reading a token Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.3: ) $2 = nterm exp (7.4: 1) -> $$ = nterm exp (7.3-4: -1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.2: ) $2 = nterm exp (7.3-4: -1) -> $$ = nterm exp (7.2-4: 1) Entering state 10 Next token is token '=' (7.6: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.1: ) $2 = nterm exp (7.2-4: 1) -> $$ = nterm exp (7.1-4: -1) Entering state 8 Next token is token '=' (7.6: ) Shifting token '=' (7.6: ) Entering state 18 Reading a token Next token is token '-' (7.8: ) Shifting token '-' (7.8: ) Entering state 2 Reading a token Next token is token number (7.9: 1) Shifting token number (7.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (7.9: 1) -> $$ = nterm exp (7.9: 1) Entering state 10 Reading a token Next token is 
token '\n' (7.10-8.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (7.8: ) $2 = nterm exp (7.9: 1) -> $$ = nterm exp (7.8-9: -1) Entering state 27 Next token is token '\n' (7.10-8.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (7.1-4: -1) $2 = token '=' (7.6: ) $3 = nterm exp (7.8-9: -1) -> $$ = nterm exp (7.1-9: -1) Entering state 8 Next token is token '\n' (7.10-8.0: ) Shifting token '\n' (7.10-8.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (7.1-9: -1) $2 = token '\n' (7.10-8.0: ) -> $$ = nterm line (7.1-8.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-7.0: ) $2 = nterm line (7.1-8.0: ) -> $$ = nterm input (1.1-8.0: ) Entering state 6 Reading a token Next token is token '\n' (8.1-9.0: ) Shifting token '\n' (8.1-9.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (8.1-9.0: ) -> $$ = nterm line (8.1-9.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-8.0: ) $2 = nterm line (8.1-9.0: ) -> $$ = nterm input (1.1-9.0: ) Entering state 6 Reading a token Next token is token number (9.1: 1) Shifting token number (9.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.1: 1) -> $$ = nterm exp (9.1: 1) Entering state 8 Reading a token Next token is token '-' (9.3: ) Shifting token '-' (9.3: ) Entering state 19 Reading a token Next token is token number (9.5: 2) Shifting token number (9.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.5: 2) -> $$ = nterm exp (9.5: 2) Entering state 28 Reading a token Next token is token '-' (9.7: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1: 1) $2 = token '-' (9.3: ) $3 = nterm exp (9.5: 2) -> $$ = nterm exp (9.1-5: -1) Entering state 8 Next token is token '-' (9.7: ) Shifting token '-' (9.7: ) Entering state 19 Reading a token Next token is token number (9.9: 3) Shifting token number (9.9: 3) Entering state 1 Reducing 
stack 0 by rule 5 (line 79): $1 = token number (9.9: 3) -> $$ = nterm exp (9.9: 3) Entering state 28 Reading a token Next token is token '=' (9.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (9.1-5: -1) $2 = token '-' (9.7: ) $3 = nterm exp (9.9: 3) -> $$ = nterm exp (9.1-9: -4) Entering state 8 Next token is token '=' (9.11: ) Shifting token '=' (9.11: ) Entering state 18 Reading a token Next token is token '-' (9.13: ) Shifting token '-' (9.13: ) Entering state 2 Reading a token Next token is token number (9.14: 4) Shifting token number (9.14: 4) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (9.14: 4) -> $$ = nterm exp (9.14: 4) Entering state 10 Reading a token Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 11 (line 102): $1 = token '-' (9.13: ) $2 = nterm exp (9.14: 4) -> $$ = nterm exp (9.13-14: -4) Entering state 27 Next token is token '\n' (9.15-10.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (9.1-9: -4) $2 = token '=' (9.11: ) $3 = nterm exp (9.13-14: -4) -> $$ = nterm exp (9.1-14: -4) Entering state 8 Next token is token '\n' (9.15-10.0: ) Shifting token '\n' (9.15-10.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (9.1-14: -4) $2 = token '\n' (9.15-10.0: ) -> $$ = nterm line (9.1-10.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-9.0: ) $2 = nterm line (9.1-10.0: ) -> $$ = nterm input (1.1-10.0: ) Entering state 6 Reading a token Next token is token number (10.1: 1) Shifting token number (10.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.1: 1) -> $$ = nterm exp (10.1: 1) Entering state 8 Reading a token Next token is token '-' (10.3: ) Shifting token '-' (10.3: ) Entering state 19 Reading a token Next token is token '(' (10.5: ) Shifting token '(' (10.5: ) Entering state 4 Reading a token Next token is token number (10.6: 2) Shifting token number (10.6: 2) Entering state 1 Reducing stack 0 
by rule 5 (line 79): $1 = token number (10.6: 2) -> $$ = nterm exp (10.6: 2) Entering state 12 Reading a token Next token is token '-' (10.8: ) Shifting token '-' (10.8: ) Entering state 19 Reading a token Next token is token number (10.10: 3) Shifting token number (10.10: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.10: 3) -> $$ = nterm exp (10.10: 3) Entering state 28 Reading a token Next token is token ')' (10.11: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.6: 2) $2 = token '-' (10.8: ) $3 = nterm exp (10.10: 3) -> $$ = nterm exp (10.6-10: -1) Entering state 12 Next token is token ')' (10.11: ) Shifting token ')' (10.11: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (10.5: ) $2 = nterm exp (10.6-10: -1) $3 = token ')' (10.11: ) -> $$ = nterm exp (10.5-11: -1) Entering state 28 Reading a token Next token is token '=' (10.13: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (10.1: 1) $2 = token '-' (10.3: ) $3 = nterm exp (10.5-11: -1) -> $$ = nterm exp (10.1-11: 2) Entering state 8 Next token is token '=' (10.13: ) Shifting token '=' (10.13: ) Entering state 18 Reading a token Next token is token number (10.15: 2) Shifting token number (10.15: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (10.15: 2) -> $$ = nterm exp (10.15: 2) Entering state 27 Reading a token Next token is token '\n' (10.16-11.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (10.1-11: 2) $2 = token '=' (10.13: ) $3 = nterm exp (10.15: 2) -> $$ = nterm exp (10.1-15: 2) Entering state 8 Next token is token '\n' (10.16-11.0: ) Shifting token '\n' (10.16-11.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (10.1-15: 2) $2 = token '\n' (10.16-11.0: ) -> $$ = nterm line (10.1-11.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-10.0: ) $2 = nterm line (10.1-11.0: ) -> $$ = nterm input (1.1-11.0: ) Entering state 6 
Reading a token Next token is token '\n' (11.1-12.0: ) Shifting token '\n' (11.1-12.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (11.1-12.0: ) -> $$ = nterm line (11.1-12.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-11.0: ) $2 = nterm line (11.1-12.0: ) -> $$ = nterm input (1.1-12.0: ) Entering state 6 Reading a token Next token is token number (12.1: 2) Shifting token number (12.1: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.1: 2) -> $$ = nterm exp (12.1: 2) Entering state 8 Reading a token Next token is token '^' (12.2: ) Shifting token '^' (12.2: ) Entering state 23 Reading a token Next token is token number (12.3: 2) Shifting token number (12.3: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.3: 2) -> $$ = nterm exp (12.3: 2) Entering state 32 Reading a token Next token is token '^' (12.4: ) Shifting token '^' (12.4: ) Entering state 23 Reading a token Next token is token number (12.5: 3) Shifting token number (12.5: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.5: 3) -> $$ = nterm exp (12.5: 3) Entering state 32 Reading a token Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.3: 2) $2 = token '^' (12.4: ) $3 = nterm exp (12.5: 3) -> $$ = nterm exp (12.3-5: 8) Entering state 32 Next token is token '=' (12.7: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (12.1: 2) $2 = token '^' (12.2: ) $3 = nterm exp (12.3-5: 8) -> $$ = nterm exp (12.1-5: 256) Entering state 8 Next token is token '=' (12.7: ) Shifting token '=' (12.7: ) Entering state 18 Reading a token Next token is token number (12.9-11: 256) Shifting token number (12.9-11: 256) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (12.9-11: 256) -> $$ = nterm exp (12.9-11: 256) Entering state 27 Reading a token Next token is token '\n' (12.12-13.0: ) Reducing stack 0 
by rule 6 (line 80): $1 = nterm exp (12.1-5: 256) $2 = token '=' (12.7: ) $3 = nterm exp (12.9-11: 256) -> $$ = nterm exp (12.1-11: 256) Entering state 8 Next token is token '\n' (12.12-13.0: ) Shifting token '\n' (12.12-13.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (12.1-11: 256) $2 = token '\n' (12.12-13.0: ) -> $$ = nterm line (12.1-13.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-12.0: ) $2 = nterm line (12.1-13.0: ) -> $$ = nterm input (1.1-13.0: ) Entering state 6 Reading a token Next token is token '(' (13.1: ) Shifting token '(' (13.1: ) Entering state 4 Reading a token Next token is token number (13.2: 2) Shifting token number (13.2: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.2: 2) -> $$ = nterm exp (13.2: 2) Entering state 12 Reading a token Next token is token '^' (13.3: ) Shifting token '^' (13.3: ) Entering state 23 Reading a token Next token is token number (13.4: 2) Shifting token number (13.4: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.4: 2) -> $$ = nterm exp (13.4: 2) Entering state 32 Reading a token Next token is token ')' (13.5: ) Reducing stack 0 by rule 12 (line 103): $1 = nterm exp (13.2: 2) $2 = token '^' (13.3: ) $3 = nterm exp (13.4: 2) -> $$ = nterm exp (13.2-4: 4) Entering state 12 Next token is token ')' (13.5: ) Shifting token ')' (13.5: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (13.1: ) $2 = nterm exp (13.2-4: 4) $3 = token ')' (13.5: ) -> $$ = nterm exp (13.1-5: 4) Entering state 8 Reading a token Next token is token '^' (13.6: ) Shifting token '^' (13.6: ) Entering state 23 Reading a token Next token is token number (13.7: 3) Shifting token number (13.7: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.7: 3) -> $$ = nterm exp (13.7: 3) Entering state 32 Reading a token Next token is token '=' (13.9: ) Reducing stack 0 by rule 12 
(line 103): $1 = nterm exp (13.1-5: 4) $2 = token '^' (13.6: ) $3 = nterm exp (13.7: 3) -> $$ = nterm exp (13.1-7: 64) Entering state 8 Next token is token '=' (13.9: ) Shifting token '=' (13.9: ) Entering state 18 Reading a token Next token is token number (13.11-12: 64) Shifting token number (13.11-12: 64) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (13.11-12: 64) -> $$ = nterm exp (13.11-12: 64) Entering state 27 Reading a token Next token is token '\n' (13.13-14.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (13.1-7: 64) $2 = token '=' (13.9: ) $3 = nterm exp (13.11-12: 64) -> $$ = nterm exp (13.1-12: 64) Entering state 8 Next token is token '\n' (13.13-14.0: ) Shifting token '\n' (13.13-14.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (13.1-12: 64) $2 = token '\n' (13.13-14.0: ) -> $$ = nterm line (13.1-14.0: ) Entering state 17 Reducing stack 0 by rule 2 (line 70): $1 = nterm input (1.1-13.0: ) $2 = nterm line (13.1-14.0: ) -> $$ = nterm input (1.1-14.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token end of input (14.1: )
Entering state 16
Cleanup: popping token end of input (14.1: )
Cleanup: popping nterm input (1.1-14.0: )
input:
  | 1 2
./calc.at:1494: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1.1: 1)
Shifting token number (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token number (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token number (1.3: 2)
1.3: syntax error, unexpected number
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token number (1.3: 2)
./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1.1: 1)
Shifting token number (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token number (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token number (1.3: 2)
1.3: syntax error, unexpected number
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token number (1.3: 2)
./calc.at:1494: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1494: cat stderr
input:
  | 1//2
./calc.at:1494: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1.1: 1)
Shifting token number (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token number (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '/' (1.2: )
Shifting token '/' (1.2: )
Entering state 22
Reading a token
Next token is token '/' (1.3: )
1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!'
Error: popping token '/' (1.2: )
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token '/' (1.3: )
./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1.1: 1)
Shifting token number (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token number (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '/' (1.2: )
Shifting token '/' (1.2: )
Entering state 22
Reading a token
Next token is token '/' (1.3: )
1.3: syntax error, unexpected '/', expecting number or '-' or '(' or '!'
Error: popping token '/' (1.2: )
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token '/' (1.3: )
./calc.at:1494: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1494: cat stderr
input:
  | error
./calc.at:1494: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token invalid token (1.1: )
1.1: syntax error, unexpected invalid token
Cleanup: discarding lookahead token invalid token (1.1: )
./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token invalid token (1.1: )
1.1: syntax error, unexpected invalid token
Cleanup: discarding lookahead token invalid token (1.1: )
./calc.at:1494: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~ /\[(.*?)\]/g;
    ($#exps && $#exps < 4)
    ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}"
    : "syntax error, unexpected $unexp";
  }eg
' expout || exit 77
./calc.at:1494: cat stderr
input:
  | 1 = 2 = 3
./calc.at:1494: $PREPARSER ./calc input
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1.1: 1)
Shifting token number (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token number (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '=' (1.3: )
Shifting token '=' (1.3: )
Entering state 18
Reading a token
Next token is token number (1.5: 2)
Shifting token number (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token number (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 27
Reading a token
Next token is token '=' (1.7: )
1.7: syntax error, unexpected '='
Error: popping nterm exp (1.5: 2)
Error: popping token '=' (1.3: )
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token '=' (1.7: )
./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token number (1.1: 1)
Shifting token number (1.1: 1)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token number (1.1: 1)
-> $$ = nterm exp (1.1: 1)
Entering state 8
Reading a token
Next token is token '=' (1.3: )
Shifting token '=' (1.3: )
Entering state 18
Reading a token
Next token is token number (1.5: 2)
Shifting token number (1.5: 2)
Entering state 1
Reducing stack 0 by rule 5 (line 79):
   $1 = token number (1.5: 2)
-> $$ = nterm exp (1.5: 2)
Entering state 27
Reading a token
Next token is token '=' (1.7: )
1.7: syntax error, unexpected '='
Error: popping nterm exp (1.5: 2)
Error: popping token '=' (1.3: )
Error: popping nterm exp (1.1: 1)
Cleanup: discarding lookahead token '=' (1.7: )
./calc.at:1494: "$PERL" -pi -e 'use strict;
  s{syntax error on token \[(.*?)\] \(expected: (.*)\)}
  {
    my $unexp = $1;
    my @exps = $2 =~
/\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | | +1 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '\n' (1.1-2.0: ) Shifting token '\n' (1.1-2.0: ) Entering state 3 Reducing stack 0 by rule 3 (line 74): $1 = token '\n' (1.1-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Next token is token '+' (2.1: ) 2.1: syntax error, unexpected '+' Error: popping nterm input (1.1-2.0: ) Cleanup: discarding lookahead token '+' (2.1: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr ./calc.at:1494: $PREPARSER ./calc /dev/null stderr: Starting parse Entering state 0 Reading a token Now at end of input. 
1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Now at end of input. 1.1: syntax error, unexpected end of input Cleanup: discarding lookahead token end of input (1.1: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | () + (1 + 1 + 1 +) + (* * *) + (1 * 2 * *) = 1 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' 
Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Reducing stack 0 
by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp 
(1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token ')' (1.2: ) 1.2: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Shifting token error (1.2: ) Entering state 11 Next token is token ')' (1.2: ) Shifting token ')' (1.2: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.2: ) -> $$ = nterm exp (1.1-2: 1111) Entering state 8 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token '(' (1.6: ) Shifting token '(' (1.6: ) Entering state 4 Reading a token Next token is token number (1.7: 1) Shifting token number (1.7: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.7: 1) -> $$ = nterm exp (1.7: 1) Entering state 12 Reading a token Next token is token '+' (1.9: ) Shifting token '+' (1.9: ) Entering state 20 Reading a token Next token is token number (1.11: 1) Shifting token number (1.11: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11: 1) -> $$ = nterm exp (1.11: 1) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by 
rule 7 (line 90): $1 = nterm exp (1.7: 1) $2 = token '+' (1.9: ) $3 = nterm exp (1.11: 1) -> $$ = nterm exp (1.7-11: 2) Entering state 12 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token number (1.15: 1) Shifting token number (1.15: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.15: 1) -> $$ = nterm exp (1.15: 1) Entering state 29 Reading a token Next token is token '+' (1.17: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.7-11: 2) $2 = token '+' (1.13: ) $3 = nterm exp (1.15: 1) -> $$ = nterm exp (1.7-15: 3) Entering state 12 Next token is token '+' (1.17: ) Shifting token '+' (1.17: ) Entering state 20 Reading a token Next token is token ')' (1.18: ) 1.18: syntax error, unexpected ')', expecting number or '-' or '(' or '!' Error: popping token '+' (1.17: ) Error: popping nterm exp (1.7-15: 3) Shifting token error (1.7-18: ) Entering state 11 Next token is token ')' (1.18: ) Shifting token ')' (1.18: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.6: ) $2 = token error (1.7-18: ) $3 = token ')' (1.18: ) -> $$ = nterm exp (1.6-18: 1111) Entering state 29 Reading a token Next token is token '+' (1.20: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-2: 1111) $2 = token '+' (1.4: ) $3 = nterm exp (1.6-18: 1111) -> $$ = nterm exp (1.1-18: 2222) Entering state 8 Next token is token '+' (1.20: ) Shifting token '+' (1.20: ) Entering state 20 Reading a token Next token is token '(' (1.22: ) Shifting token '(' (1.22: ) Entering state 4 Reading a token Next token is token '*' (1.23: ) 1.23: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.23: ) Entering state 11 Next token is token '*' (1.23: ) Error: discarding token '*' (1.23: ) Reading a token Next token is token '*' (1.25: ) Error: discarding token '*' (1.25: ) Reading a token Next token is token '*' (1.27: ) Error: discarding token '*' (1.27: ) Reading a token Next token is token ')' (1.28: ) Entering state 11 Next token is token ')' (1.28: ) Shifting token ')' (1.28: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.22: ) $2 = token error (1.23-27: ) $3 = token ')' (1.28: ) -> $$ = nterm exp (1.22-28: 1111) Entering state 29 Reading a token Next token is token '+' (1.30: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-18: 2222) $2 = token '+' (1.20: ) $3 = nterm exp (1.22-28: 1111) -> $$ = nterm exp (1.1-28: 3333) Entering state 8 Next token is token '+' (1.30: ) Shifting token '+' (1.30: ) Entering state 20 Reading a token Next token is token '(' (1.32: ) Shifting token '(' (1.32: ) Entering state 4 Reading a token Next token is token number (1.33: 1) Shifting token number (1.33: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.33: 1) -> $$ = nterm exp (1.33: 1) Entering state 12 Reading a token Next token is token '*' (1.35: ) Shifting token '*' (1.35: ) Entering state 21 Reading a token Next token is token number (1.37: 2) Shifting token number (1.37: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.37: 2) -> $$ = nterm exp (1.37: 2) Entering state 30 Reading a token Next token is token '*' (1.39: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.33: 1) $2 = token '*' (1.35: ) $3 = nterm exp (1.37: 2) -> $$ = nterm exp (1.33-37: 2) Entering state 12 Next token is token '*' (1.39: ) Shifting token '*' (1.39: ) Entering state 21 Reading a token Next token is token '*' (1.41: ) 1.41: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Error: popping token '*' (1.39: ) Error: popping nterm exp (1.33-37: 2) Shifting token error (1.33-41: ) Entering state 11 Next token is token '*' (1.41: ) Error: discarding token '*' (1.41: ) Reading a token Next token is token ')' (1.42: ) Entering state 11 Next token is token ')' (1.42: ) Shifting token ')' (1.42: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.32: ) $2 = token error (1.33-41: ) $3 = token ')' (1.42: ) -> $$ = nterm exp (1.32-42: 1111) Entering state 29 Reading a token Next token is token '=' (1.44: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-28: 3333) $2 = token '+' (1.30: ) $3 = nterm exp (1.32-42: 1111) -> $$ = nterm exp (1.1-42: 4444) Entering state 8 Next token is token '=' (1.44: ) Shifting token '=' (1.44: ) Entering state 18 Reading a token Next token is token number (1.46: 1) Shifting token number (1.46: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.46: 1) -> $$ = nterm exp (1.46: 1) Entering state 27 Reading a token Next token is token '\n' (1.47-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-42: 4444) $2 = token '=' (1.44: ) $3 = nterm exp (1.46: 1) 1.1-46: error: 4444 != 1 -> $$ = nterm exp (1.1-46: 4444) Entering state 8 Next token is token '\n' (1.47-2.0: ) Shifting token '\n' (1.47-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-46: 4444) $2 = token '\n' (1.47-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | (!!) + (1 2) = 1 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' (1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp 
(1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '!' (1.2: ) Shifting token '!' (1.2: ) Entering state 5 Reading a token Next token is token '!' (1.3: ) Shifting token '!' (1.3: ) Entering state 15 Reducing stack 0 by rule 16 (line 107): $1 = token '!' (1.2: ) $2 = token '!' 
(1.3: ) Shifting token error (1.2-3: ) Entering state 11 Reading a token Next token is token ')' (1.4: ) Shifting token ')' (1.4: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-3: ) $3 = token ')' (1.4: ) -> $$ = nterm exp (1.1-4: 1111) Entering state 8 Reading a token Next token is token '+' (1.6: ) Shifting token '+' (1.6: ) Entering state 20 Reading a token Next token is token '(' (1.8: ) Shifting token '(' (1.8: ) Entering state 4 Reading a token Next token is token number (1.9: 1) Shifting token number (1.9: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 1) -> $$ = nterm exp (1.9: 1) Entering state 12 Reading a token Next token is token number (1.11: 2) 1.11: syntax error, unexpected number Error: popping nterm exp (1.9: 1) Shifting token error (1.9-11: ) Entering state 11 Next token is token number (1.11: 2) Error: discarding token number (1.11: 2) Reading a token Next token is token ')' (1.12: ) Entering state 11 Next token is token ')' (1.12: ) Shifting token ')' (1.12: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.8: ) $2 = token error (1.9-11: ) $3 = token ')' (1.12: ) -> $$ = nterm exp (1.8-12: 1111) Entering state 29 Reading a token Next token is token '=' (1.14: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-4: 1111) $2 = token '+' (1.6: ) $3 = nterm exp (1.8-12: 1111) -> $$ = nterm exp (1.1-12: 2222) Entering state 8 Next token is token '=' (1.14: ) Shifting token '=' (1.14: ) Entering state 18 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-12: 2222) $2 = token '=' (1.14: ) $3 = nterm exp (1.16: 1) 1.1-16: error: 2222 != 1 -> $$ = nterm exp 
(1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | (- *) + (1 2) = 1 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = 
nterm exp (1.17: 1) Entering state 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '-' (1.2: ) Shifting token '-' (1.2: ) Entering state 2 Reading a token Next token is token '*' (1.4: ) 1.4: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.4: ) Entering state 9 Reducing stack 0 by rule 15 (line 106): $1 = token '-' (1.2: ) $2 = token error (1.4: ) Shifting token error (1.2-4: ) Entering state 11 Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token number (1.10: 1) Shifting token number (1.10: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.10: 1) -> $$ = nterm exp (1.10: 1) Entering state 12 Reading a token Next token is token number (1.12: 2) 1.12: syntax error, unexpected number Error: popping nterm exp (1.10: 1) Shifting token error (1.10-12: ) Entering state 11 Next token is token number (1.12: 2) Error: discarding token number (1.12: 2) Reading a token Next token is token ')' (1.13: ) Entering state 11 Next token is token ')' (1.13: ) Shifting token ')' (1.13: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10-12: ) $3 = token ')' (1.13: ) -> $$ = nterm exp (1.9-13: 1111) Entering state 29 Reading a token Next token is token '=' (1.15: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-13: 1111) -> $$ = nterm exp (1.1-13: 2222) Entering state 8 Next token is token '=' (1.15: ) Shifting token '=' (1.15: ) Entering state 18 Reading a token Next token is token number (1.17: 1) Shifting token number (1.17: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.17: 1) -> $$ = 
nterm exp (1.17: 1) Entering state 27 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-13: 2222) $2 = token '=' (1.15: ) $3 = nterm exp (1.17: 1) 1.1-17: error: 2222 != 1 -> $$ = nterm exp (1.1-17: 2222) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2222) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | (* *) + (*) + (*) ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token '*' (1.2: ) 1.2: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.2: ) Entering state 11 Next token is token '*' (1.2: ) Error: discarding token '*' (1.2: ) Reading a token Next token is token '*' (1.4: ) Error: discarding token '*' (1.4: ) Reading a token Next token is token ')' (1.5: ) Entering state 11 Next token is token ')' (1.5: ) Shifting token ')' (1.5: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-4: ) $3 = token ')' (1.5: ) -> $$ = nterm exp (1.1-5: 1111) Entering state 8 Reading a token Next token is token '+' (1.7: ) Shifting token '+' (1.7: ) Entering state 20 Reading a token Next token is token '(' (1.9: ) Shifting token '(' (1.9: ) Entering state 4 Reading a token Next token is token '*' (1.10: ) 1.10: syntax error, unexpected '*', expecting number or '-' or '(' or '!' Shifting token error (1.10: ) Entering state 11 Next token is token '*' (1.10: ) Error: discarding token '*' (1.10: ) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.9: ) $2 = token error (1.10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.9-11: 1111) Entering state 29 Reading a token Next token is token '+' (1.13: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-5: 1111) $2 = token '+' (1.7: ) $3 = nterm exp (1.9-11: 1111) -> $$ = nterm exp (1.1-11: 2222) Entering state 8 Next token is token '+' (1.13: ) Shifting token '+' (1.13: ) Entering state 20 Reading a token Next token is token '(' (1.15: ) Shifting token '(' (1.15: ) Entering state 4 Reading a token Next token is token '*' (1.16: ) 1.16: syntax error, unexpected '*', expecting number or '-' or '(' or '!' 
Shifting token error (1.16: ) Entering state 11 Next token is token '*' (1.16: ) Error: discarding token '*' (1.16: ) Reading a token Next token is token ')' (1.17: ) Entering state 11 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.15: ) $2 = token error (1.16: ) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.15-17: 1111) Entering state 29 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-11: 2222) $2 = token '+' (1.13: ) $3 = nterm exp (1.15-17: 1111) -> $$ = nterm exp (1.1-17: 3333) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 3333) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | 1 + 2 * 3 + !+ ++ ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '+' (1.14: ) Shifting token '+' (1.14: ) Entering state 14 Reducing stack 0 by rule 17 (line 108): $1 = token '!' 
(1.13: ) $2 = token '+' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) input: | 1 + 2 * 3 + !- ++ ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' 
(1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token number (1.1: 1) Shifting token number (1.1: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.1: 1) -> $$ = nterm exp (1.1: 1) Entering state 8 Reading a token Next token is token '+' (1.3: ) Shifting token '+' (1.3: ) Entering state 20 Reading a token Next token is token number (1.5: 2) Shifting token number (1.5: 2) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.5: 2) -> $$ = nterm exp (1.5: 2) Entering state 29 Reading a token Next token is token '*' (1.7: ) Shifting token '*' (1.7: ) Entering state 21 Reading a token Next token is token number (1.9: 3) Shifting token number (1.9: 3) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.9: 3) -> $$ = nterm exp (1.9: 3) Entering state 30 Reading a token Next token is token '+' (1.11: ) Reducing stack 0 by rule 9 (line 92): $1 = nterm exp (1.5: 2) $2 = token '*' (1.7: ) $3 = nterm exp (1.9: 3) -> $$ = nterm exp (1.5-9: 6) Entering state 29 Next token is token '+' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1: 1) $2 = token '+' (1.3: ) $3 = nterm exp (1.5-9: 6) -> $$ = nterm exp (1.1-9: 7) Entering state 8 Next token is token '+' (1.11: ) Shifting token '+' (1.11: ) Entering state 20 Reading a token Next token is token '!' (1.13: ) Shifting token '!' (1.13: ) Entering state 5 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 13 Reducing stack 0 by rule 18 (line 109): $1 = token '!' 
(1.13: ) $2 = token '-' (1.14: ) Cleanup: popping token '+' (1.11: ) Cleanup: popping nterm exp (1.1-9: 7) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | (#) + (#) = 2222 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = 
nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
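The `"$PERL" -pi -e ...` one-liner that recurs throughout this log rewrites diagnostics of the form `syntax error on token [X] (expected: [A] [B])` (emitted by some Bison skeletons) into the C-skeleton form checked in `expout`. A rough Python equivalent of that Perl substitution, for readers who find the one-liner hard to parse:

```python
import re

_PATTERN = re.compile(r"syntax error on token \[(.*?)\] \(expected: (.*)\)")

def normalize(line):
    """Rewrite 'syntax error on token [X] (expected: [A] [B])' into the
    'syntax error, unexpected X, expecting A or B' form used in expout.
    Mirrors the Perl: list the expected tokens only when there are 2..4
    of them ($#exps && $#exps < 4), otherwise drop the 'expecting' part."""
    def repl(m):
        unexp = m.group(1)
        exps = re.findall(r"\[(.*?)\]", m.group(2))
        if 2 <= len(exps) <= 4:
            return "syntax error, unexpected %s, expecting %s" % (
                unexp, " or ".join(exps))
        return "syntax error, unexpected %s" % unexp
    return _PATTERN.sub(repl, line)
```

With four expected tokens this yields exactly the message style seen in the traces, e.g. `syntax error, unexpected '*', expecting number or '-' or '(' or '!'`.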
Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token ')' (1.3: ) Entering state 11 Next token is token ')' (1.3: ) Shifting token ')' (1.3: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2: ) $3 = token ')' (1.3: ) -> $$ = nterm exp (1.1-3: 1111) Entering state 8 Reading a token Next token is token '+' (1.5: ) Shifting token '+' (1.5: ) Entering state 20 Reading a token Next token is token '(' (1.7: ) Shifting token '(' (1.7: ) Entering state 4 Reading a token 1.8: syntax error: invalid character: '#' Next token is token error (1.8: ) Shifting token error (1.8: ) Entering state 11 Next token is token error (1.8: ) Error: discarding token error (1.8: ) Reading a token Next token is token ')' (1.9: ) Entering state 11 Next token is token ')' (1.9: ) Shifting token ')' (1.9: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.7: ) $2 = token error (1.8: ) $3 = token ')' (1.9: ) -> $$ = nterm exp (1.7-9: 1111) Entering state 29 Reading a token Next token is token '=' (1.11: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.1-3: 1111) $2 = token '+' (1.5: ) $3 = nterm exp (1.7-9: 1111) -> $$ = nterm exp (1.1-9: 2222) Entering state 8 Next token is token '=' (1.11: ) Shifting token '=' (1.11: ) Entering state 18 Reading a token Next token is token number (1.13-16: 2222) Shifting token number (1.13-16: 2222) Entering state 1 Reducing stack 0 by 
rule 5 (line 79): $1 = token number (1.13-16: 2222) -> $$ = nterm exp (1.13-16: 2222) Entering state 27 Reading a token Next token is token '\n' (1.17-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-9: 2222) $2 = token '=' (1.11: ) $3 = nterm exp (1.13-16: 2222) -> $$ = nterm exp (1.1-16: 2222) Entering state 8 Next token is token '\n' (1.17-2.0: ) Shifting token '\n' (1.17-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-16: 2222) $2 = token '\n' (1.17-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? 
"syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | (1 + #) = 1111 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm 
line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is 
token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | (# + 1) = 1111 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number 
(1.11-14: 1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token 1.2: syntax error: invalid character: '#' Next token is token error (1.2: ) Shifting token error (1.2: ) Entering state 11 Next token is token error (1.2: ) Error: discarding token error (1.2: ) Reading a token Next token is token '+' (1.4: ) Error: discarding token '+' (1.4: ) Reading a token Next token is token number (1.6: 1) Error: discarding token number (1.6: 1) Reading a token Next token is token ')' (1.7: ) Entering state 11 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-6: ) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 1111) Entering state 8 Reading a token Next token is token '=' (1.9: ) Shifting token '=' (1.9: ) Entering state 18 Reading a token Next token is token number (1.11-14: 1111) Shifting token number (1.11-14: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.11-14: 
1111) -> $$ = nterm exp (1.11-14: 1111) Entering state 27 Reading a token Next token is token '\n' (1.15-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-7: 1111) $2 = token '=' (1.9: ) $3 = nterm exp (1.11-14: 1111) -> $$ = nterm exp (1.1-14: 1111) Entering state 8 Next token is token '\n' (1.15-2.0: ) Shifting token '\n' (1.15-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-14: 1111) $2 = token '\n' (1.15-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | (1 + # + 1) = 1111 ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' 
(1.8: ) Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
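Every symbol in these traces carries a source location such as `1.2`, `1.2-6`, or `1.18-2.0`, and reductions merge them: `'('` at 1.1 plus `error` at 1.2-10 plus `')'` at 1.11 yields `exp` at 1.1-11. A minimal sketch of that behavior, inferred from the log (the `((line, col), (line, col))` tuple layout and the exact end-column semantics are assumptions, not the calc.at implementation):

```python
def merge(first, last):
    """Bison's default location rule: @$ spans from the first symbol's
    start to the last symbol's end, e.g. '(' at 1.1 .. ')' at 1.11."""
    return (first[0], last[1])

def fmt(loc):
    """Format a location the way these parser traces print it:
    '1.2' for a point, '1.2-6' within one line, '1.18-2.0' across lines."""
    (l1, c1), (l2, c2) = loc
    if (l1, c1) == (l2, c2):
        return "%d.%d" % (l1, c1)
    if l1 == l2:
        return "%d.%d-%d" % (l1, c1, c2)
    return "%d.%d-%d.%d" % (l1, c1, l2, c2)
```

This is also why the error messages in the log can point at precise spans, e.g. `1.1-17: error: 2222 != 1`.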
Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token 1.6: syntax error: invalid character: '#' Next token is token error (1.6: ) Error: popping token '+' (1.4: ) Error: popping nterm exp (1.2: 1) Shifting token error (1.2-6: ) Entering state 11 Next token is token error (1.6: ) Error: discarding token error (1.6: ) Reading a token Next token is token '+' (1.8: ) Error: discarding token '+' (1.8: ) Reading a token Next token is token number (1.10: 1) Error: discarding token number (1.10: 1) Reading a token Next token is token ')' (1.11: ) Entering state 11 Next token is token ')' (1.11: ) Shifting token ')' (1.11: ) Entering state 25 Reducing stack 0 by rule 14 (line 105): $1 = token '(' (1.1: ) $2 = token error (1.2-10: ) $3 = token ')' (1.11: ) -> $$ = nterm exp (1.1-11: 1111) Entering state 8 Reading a token Next token is token '=' (1.13: ) Shifting token '=' (1.13: ) Entering state 18 Reading a token Next token is token number (1.15-18: 1111) Shifting token number (1.15-18: 1111) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.15-18: 1111) -> $$ = nterm exp (1.15-18: 1111) Entering state 27 Reading a token Next token is token '\n' (1.19-2.0: ) Reducing stack 0 by rule 6 (line 80): $1 = nterm exp (1.1-11: 1111) $2 = token '=' (1.13: ) $3 = nterm exp (1.15-18: 1111) -> $$ = nterm exp (1.1-18: 1111) Entering state 8 Next token is 
token '\n' (1.19-2.0: ) Shifting token '\n' (1.19-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-18: 1111) $2 = token '\n' (1.19-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr input: | (1 + 1) / (1 - 1) ./calc.at:1494: $PREPARSER ./calc input stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 
Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) Entering state 28 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. 
Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reading a token Next token is token '(' (1.1: ) Shifting token '(' (1.1: ) Entering state 4 Reading a token Next token is token number (1.2: 1) Shifting token number (1.2: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.2: 1) -> $$ = nterm exp (1.2: 1) Entering state 12 Reading a token Next token is token '+' (1.4: ) Shifting token '+' (1.4: ) Entering state 20 Reading a token Next token is token number (1.6: 1) Shifting token number (1.6: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.6: 1) -> $$ = nterm exp (1.6: 1) Entering state 29 Reading a token Next token is token ')' (1.7: ) Reducing stack 0 by rule 7 (line 90): $1 = nterm exp (1.2: 1) $2 = token '+' (1.4: ) $3 = nterm exp (1.6: 1) -> $$ = nterm exp (1.2-6: 2) Entering state 12 Next token is token ')' (1.7: ) Shifting token ')' (1.7: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.1: ) $2 = nterm exp (1.2-6: 2) $3 = token ')' (1.7: ) -> $$ = nterm exp (1.1-7: 2) Entering state 8 Reading a token Next token is token '/' (1.9: ) Shifting token '/' (1.9: ) Entering state 22 Reading a token Next token is token '(' (1.11: ) Shifting token '(' (1.11: ) Entering state 4 Reading a token Next token is token number (1.12: 1) Shifting token number (1.12: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.12: 1) -> $$ = nterm exp (1.12: 1) Entering state 12 Reading a token Next token is token '-' (1.14: ) Shifting token '-' (1.14: ) Entering state 19 Reading a token Next token is token number (1.16: 1) Shifting token number (1.16: 1) Entering state 1 Reducing stack 0 by rule 5 (line 79): $1 = token number (1.16: 1) -> $$ = nterm exp (1.16: 1) 
Entering state 28 Reading a token Next token is token ')' (1.17: ) Reducing stack 0 by rule 8 (line 91): $1 = nterm exp (1.12: 1) $2 = token '-' (1.14: ) $3 = nterm exp (1.16: 1) -> $$ = nterm exp (1.12-16: 0) Entering state 12 Next token is token ')' (1.17: ) Shifting token ')' (1.17: ) Entering state 26 Reducing stack 0 by rule 13 (line 104): $1 = token '(' (1.11: ) $2 = nterm exp (1.12-16: 0) $3 = token ')' (1.17: ) -> $$ = nterm exp (1.11-17: 0) Entering state 31 Reading a token Next token is token '\n' (1.18-2.0: ) Reducing stack 0 by rule 10 (line 93): $1 = nterm exp (1.1-7: 2) $2 = token '/' (1.9: ) $3 = nterm exp (1.11-17: 0) 1.11-17: error: null divisor -> $$ = nterm exp (1.1-17: 2) Entering state 8 Next token is token '\n' (1.18-2.0: ) Shifting token '\n' (1.18-2.0: ) Entering state 24 Reducing stack 0 by rule 4 (line 75): $1 = nterm exp (1.1-17: 2) $2 = token '\n' (1.18-2.0: ) -> $$ = nterm line (1.1-2.0: ) Entering state 7 Reducing stack 0 by rule 1 (line 69): $1 = nterm line (1.1-2.0: ) -> $$ = nterm input (1.1-2.0: ) Entering state 6 Reading a token Now at end of input. Shifting token end of input (2.1: ) Entering state 16 Cleanup: popping token end of input (2.1: ) Cleanup: popping nterm input (1.1-2.0: ) ./calc.at:1494: "$PERL" -pi -e 'use strict; s{syntax error on token \[(.*?)\] \(expected: (.*)\)} { my $unexp = $1; my @exps = $2 =~ /\[(.*?)\]/g; ($#exps && $#exps < 4) ? "syntax error, unexpected $unexp, expecting @{[join(\" or \", @exps)]}" : "syntax error, unexpected $unexp"; }eg ' expout || exit 77 ./calc.at:1494: cat stderr 570. calc.at:1494: ok 606. torture.at:270: testing State number type: 128 states ... 
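Aside: the calc.at runs above repeatedly pipe `expout` through a Perl one-liner that rewrites Java/D-style diagnostics ("syntax error on token [X] (expected: [A] [B])") into the yacc-style wording before diffing. A minimal Python sketch of that same rewrite, for readers who don't parse Perl `s{}{}eg` easily (the names `normalize` and `repl` are illustrative, not from the test suite):

```python
import re

# Sketch of the expout normalization the calc.at tests apply via Perl:
# "syntax error on token [X] (expected: [A] [B])" becomes the yacc-style
# "syntax error, unexpected X, expecting A or B".
_PAT = re.compile(r"syntax error on token \[(.*?)\] \(expected: (.*)\)")

def normalize(line):
    def repl(m):
        unexp = m.group(1)
        exps = re.findall(r"\[(.*?)\]", m.group(2))
        # Mirrors Perl's ($#exps && $#exps < 4): emit the "expecting" list
        # only when there are between 2 and 4 expected tokens.
        if len(exps) - 1 and len(exps) - 1 < 4:
            return ("syntax error, unexpected %s, expecting %s"
                    % (unexp, " or ".join(exps)))
        return "syntax error, unexpected %s" % unexp
    return _PAT.sub(repl, line)
```

Lines that do not match the pattern pass through unchanged, which is why the same filter can be applied unconditionally to every expected-output file.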
./torture.at:270: ruby $abs_top_srcdir/tests/linear 128 >input.y || exit 77 --- /dev/null 2023-05-24 22:25:30.000000000 -1200 +++ /build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/606/stderr 2023-05-27 03:46:09.418272015 -1200 @@ -0,0 +1 @@ +/build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/606/test-source: line 14: ruby: command not found 606. torture.at:270: skipped (torture.at:270) 607. torture.at:271: testing State number type: 129 states ... ./torture.at:271: ruby $abs_top_srcdir/tests/linear 129 >input.y || exit 77 --- /dev/null 2023-05-24 22:25:30.000000000 -1200 +++ /build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/607/stderr 2023-05-27 03:46:10.782247180 -1200 @@ -0,0 +1 @@ +/build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/607/test-source: line 14: ruby: command not found 607. torture.at:271: skipped (torture.at:271) 608. torture.at:272: testing State number type: 256 states ... ./torture.at:272: ruby $abs_top_srcdir/tests/linear 256 >input.y || exit 77 --- /dev/null 2023-05-24 22:25:30.000000000 -1200 +++ /build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/608/stderr 2023-05-27 03:46:12.046224165 -1200 @@ -0,0 +1 @@ +/build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/608/test-source: line 14: ruby: command not found 608. torture.at:272: skipped (torture.at:272) 609. torture.at:273: testing State number type: 257 states ... ./torture.at:273: ruby $abs_top_srcdir/tests/linear 257 >input.y || exit 77 --- /dev/null 2023-05-24 22:25:30.000000000 -1200 +++ /build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/609/stderr 2023-05-27 03:46:13.270201878 -1200 @@ -0,0 +1 @@ +/build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/609/test-source: line 14: ruby: command not found 609. torture.at:273: skipped (torture.at:273) 610. torture.at:274: testing State number type: 32768 states ... 
./torture.at:274: ruby $abs_top_srcdir/tests/linear 32768 >input.y || exit 77 --- /dev/null 2023-05-24 22:25:30.000000000 -1200 +++ /build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/610/stderr 2023-05-27 03:46:14.618177334 -1200 @@ -0,0 +1 @@ +/build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/610/test-source: line 14: ruby: command not found 610. torture.at:274: skipped (torture.at:274) ./torture.at:140: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS 611. torture.at:275: testing State number type: 65536 states ... ./torture.at:275: ruby $abs_top_srcdir/tests/linear 65536 >input.y || exit 77 --- /dev/null 2023-05-24 22:25:30.000000000 -1200 +++ /build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/611/stderr 2023-05-27 03:46:16.130149803 -1200 @@ -0,0 +1 @@ +/build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/611/test-source: line 14: ruby: command not found 611. torture.at:275: skipped (torture.at:275) 612. torture.at:276: testing State number type: 65537 states ... ./torture.at:276: ruby $abs_top_srcdir/tests/linear 65537 >input.y || exit 77 --- /dev/null 2023-05-24 22:25:30.000000000 -1200 +++ /build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/612/stderr 2023-05-27 03:46:17.638122345 -1200 @@ -0,0 +1 @@ +/build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/612/test-source: line 14: ruby: command not found 612. torture.at:276: skipped (torture.at:276) 613. torture.at:385: testing Many lookahead tokens ... ./torture.at:387: "$PERL" -w ./gengram.pl 1000 || exit 77 stdout: %define parse.error verbose %debug %{ /* Adjust to the compiler. We used to do it here, but each time we add a new line, we have to adjust all the line numbers in error messages. It's simpler to use a constant include to a varying file. 
*/ #include # include # include # include # define MAX 1000 static int yylex (void); #include /* !POSIX */ static void yyerror (const char *msg); %} %union { int val; }; %type input exp %token token %type n1 n2 n3 n4 n5 n6 n7 n8 n9 n10 n11 n12 n13 n14 n15 n16 n17 n18 n19 n20 n21 n22 n23 n24 n25 n26 n27 n28 n29 n30 n31 n32 n33 n34 n35 n36 n37 n38 n39 n40 n41 n42 n43 n44 n45 n46 n47 n48 n49 n50 n51 n52 n53 n54 n55 n56 n57 n58 n59 n60 n61 n62 n63 n64 n65 n66 n67 n68 n69 n70 n71 n72 n73 n74 n75 n76 n77 n78 n79 n80 n81 n82 n83 n84 n85 n86 n87 n88 n89 n90 n91 n92 n93 n94 n95 n96 n97 n98 n99 n100 n101 n102 n103 n104 n105 n106 n107 n108 n109 n110 n111 n112 n113 n114 n115 n116 n117 n118 n119 n120 n121 n122 n123 n124 n125 n126 n127 n128 n129 n130 n131 n132 n133 n134 n135 n136 n137 n138 n139 n140 n141 n142 n143 n144 n145 n146 n147 n148 n149 n150 n151 n152 n153 n154 n155 n156 n157 n158 n159 n160 n161 n162 n163 n164 n165 n166 n167 n168 n169 n170 n171 n172 n173 n174 n175 n176 n177 n178 n179 n180 n181 n182 n183 n184 n185 n186 n187 n188 n189 n190 n191 n192 n193 n194 n195 n196 n197 n198 n199 n200 n201 n202 n203 n204 n205 n206 n207 n208 n209 n210 n211 n212 n213 n214 n215 n216 n217 n218 n219 n220 n221 n222 n223 n224 n225 n226 n227 n228 n229 n230 n231 n232 n233 n234 n235 n236 n237 n238 n239 n240 n241 n242 n243 n244 n245 n246 n247 n248 n249 n250 n251 n252 n253 n254 n255 n256 n257 n258 n259 n260 n261 n262 n263 n264 n265 n266 n267 n268 n269 n270 n271 n272 n273 n274 n275 n276 n277 n278 n279 n280 n281 n282 n283 n284 n285 n286 n287 n288 n289 n290 n291 n292 n293 n294 n295 n296 n297 n298 n299 n300 n301 n302 n303 n304 n305 n306 n307 n308 n309 n310 n311 n312 n313 n314 n315 n316 n317 n318 n319 n320 n321 n322 n323 n324 n325 n326 n327 n328 n329 n330 n331 n332 n333 n334 n335 n336 n337 n338 n339 n340 n341 n342 n343 n344 n345 n346 n347 n348 n349 n350 n351 n352 n353 n354 n355 n356 n357 n358 n359 n360 n361 n362 n363 n364 n365 n366 n367 n368 n369 n370 n371 n372 n373 n374 n375 n376 n377 n378 n379 n380 
n381 n382 n383 n384 n385 n386 n387 n388 n389 n390 n391 n392 n393 n394 n395 n396 n397 n398 n399 n400 n401 n402 n403 n404 n405 n406 n407 n408 n409 n410 n411 n412 n413 n414 n415 n416 n417 n418 n419 n420 n421 n422 n423 n424 n425 n426 n427 n428 n429 n430 n431 n432 n433 n434 n435 n436 n437 n438 n439 n440 n441 n442 n443 n444 n445 n446 n447 n448 n449 n450 n451 n452 n453 n454 n455 n456 n457 n458 n459 n460 n461 n462 n463 n464 n465 n466 n467 n468 n469 n470 n471 n472 n473 n474 n475 n476 n477 n478 n479 n480 n481 n482 n483 n484 n485 n486 n487 n488 n489 n490 n491 n492 n493 n494 n495 n496 n497 n498 n499 n500 n501 n502 n503 n504 n505 n506 n507 n508 n509 n510 n511 n512 n513 n514 n515 n516 n517 n518 n519 n520 n521 n522 n523 n524 n525 n526 n527 n528 n529 n530 n531 n532 n533 n534 n535 n536 n537 n538 n539 n540 n541 n542 n543 n544 n545 n546 n547 n548 n549 n550 n551 n552 n553 n554 n555 n556 n557 n558 n559 n560 n561 n562 n563 n564 n565 n566 n567 n568 n569 n570 n571 n572 n573 n574 n575 n576 n577 n578 n579 n580 n581 n582 n583 n584 n585 n586 n587 n588 n589 n590 n591 n592 n593 n594 n595 n596 n597 n598 n599 n600 n601 n602 n603 n604 n605 n606 n607 n608 n609 n610 n611 n612 n613 n614 n615 n616 n617 n618 n619 n620 n621 n622 n623 n624 n625 n626 n627 n628 n629 n630 n631 n632 n633 n634 n635 n636 n637 n638 n639 n640 n641 n642 n643 n644 n645 n646 n647 n648 n649 n650 n651 n652 n653 n654 n655 n656 n657 n658 n659 n660 n661 n662 n663 n664 n665 n666 n667 n668 n669 n670 n671 n672 n673 n674 n675 n676 n677 n678 n679 n680 n681 n682 n683 n684 n685 n686 n687 n688 n689 n690 n691 n692 n693 n694 n695 n696 n697 n698 n699 n700 n701 n702 n703 n704 n705 n706 n707 n708 n709 n710 n711 n712 n713 n714 n715 n716 n717 n718 n719 n720 n721 n722 n723 n724 n725 n726 n727 n728 n729 n730 n731 n732 n733 n734 n735 n736 n737 n738 n739 n740 n741 n742 n743 n744 n745 n746 n747 n748 n749 n750 n751 n752 n753 n754 n755 n756 n757 n758 n759 n760 n761 n762 n763 n764 n765 n766 n767 n768 n769 n770 n771 n772 n773 n774 n775 n776 n777 n778 n779 n780 
n781 n782 n783 n784 n785 n786 n787 n788 n789 n790 n791 n792 n793 n794 n795 n796 n797 n798 n799 n800 n801 n802 n803 n804 n805 n806 n807 n808 n809 n810 n811 n812 n813 n814 n815 n816 n817 n818 n819 n820 n821 n822 n823 n824 n825 n826 n827 n828 n829 n830 n831 n832 n833 n834 n835 n836 n837 n838 n839 n840 n841 n842 n843 n844 n845 n846 n847 n848 n849 n850 n851 n852 n853 n854 n855 n856 n857 n858 n859 n860 n861 n862 n863 n864 n865 n866 n867 n868 n869 n870 n871 n872 n873 n874 n875 n876 n877 n878 n879 n880 n881 n882 n883 n884 n885 n886 n887 n888 n889 n890 n891 n892 n893 n894 n895 n896 n897 n898 n899 n900 n901 n902 n903 n904 n905 n906 n907 n908 n909 n910 n911 n912 n913 n914 n915 n916 n917 n918 n919 n920 n921 n922 n923 n924 n925 n926 n927 n928 n929 n930 n931 n932 n933 n934 n935 n936 n937 n938 n939 n940 n941 n942 n943 n944 n945 n946 n947 n948 n949 n950 n951 n952 n953 n954 n955 n956 n957 n958 n959 n960 n961 n962 n963 n964 n965 n966 n967 n968 n969 n970 n971 n972 n973 n974 n975 n976 n977 n978 n979 n980 n981 n982 n983 n984 n985 n986 n987 n988 n989 n990 n991 n992 n993 n994 n995 n996 n997 n998 n999 n1000 %token t1 1 "1" t2 2 "2" t3 3 "3" t4 4 "4" t5 5 "5" t6 6 "6" t7 7 "7" t8 8 "8" t9 9 "9" t10 10 "10" t11 11 "11" t12 12 "12" t13 13 "13" t14 14 "14" t15 15 "15" t16 16 "16" t17 17 "17" t18 18 "18" t19 19 "19" t20 20 "20" t21 21 "21" t22 22 "22" t23 23 "23" t24 24 "24" t25 25 "25" t26 26 "26" t27 27 "27" t28 28 "28" t29 29 "29" t30 30 "30" t31 31 "31" t32 32 "32" t33 33 "33" t34 34 "34" t35 35 "35" t36 36 "36" t37 37 "37" t38 38 "38" t39 39 "39" t40 40 "40" t41 41 "41" t42 42 "42" t43 43 "43" t44 44 "44" t45 45 "45" t46 46 "46" t47 47 "47" t48 48 "48" t49 49 "49" t50 50 "50" t51 51 "51" t52 52 "52" t53 53 "53" t54 54 "54" t55 55 "55" t56 56 "56" t57 57 "57" t58 58 "58" t59 59 "59" t60 60 "60" t61 61 "61" t62 62 "62" t63 63 "63" t64 64 "64" t65 65 "65" t66 66 "66" t67 67 "67" t68 68 "68" t69 69 "69" t70 70 "70" t71 71 "71" t72 72 "72" t73 73 "73" t74 74 "74" t75 75 "75" t76 76 "76" t77 77 
"77" t78 78 "78" t79 79 "79" t80 80 "80" t81 81 "81" t82 82 "82" t83 83 "83" t84 84 "84" t85 85 "85" t86 86 "86" t87 87 "87" t88 88 "88" t89 89 "89" t90 90 "90" t91 91 "91" t92 92 "92" t93 93 "93" t94 94 "94" t95 95 "95" t96 96 "96" t97 97 "97" t98 98 "98" t99 99 "99" t100 100 "100" t101 101 "101" t102 102 "102" t103 103 "103" t104 104 "104" t105 105 "105" t106 106 "106" t107 107 "107" t108 108 "108" t109 109 "109" t110 110 "110" t111 111 "111" t112 112 "112" t113 113 "113" t114 114 "114" t115 115 "115" t116 116 "116" t117 117 "117" t118 118 "118" t119 119 "119" t120 120 "120" t121 121 "121" t122 122 "122" t123 123 "123" t124 124 "124" t125 125 "125" t126 126 "126" t127 127 "127" t128 128 "128" t129 129 "129" t130 130 "130" t131 131 "131" t132 132 "132" t133 133 "133" t134 134 "134" t135 135 "135" t136 136 "136" t137 137 "137" t138 138 "138" t139 139 "139" t140 140 "140" t141 141 "141" t142 142 "142" t143 143 "143" t144 144 "144" t145 145 "145" t146 146 "146" t147 147 "147" t148 148 "148" t149 149 "149" t150 150 "150" t151 151 "151" t152 152 "152" t153 153 "153" t154 154 "154" t155 155 "155" t156 156 "156" t157 157 "157" t158 158 "158" t159 159 "159" t160 160 "160" t161 161 "161" t162 162 "162" t163 163 "163" t164 164 "164" t165 165 "165" t166 166 "166" t167 167 "167" t168 168 "168" t169 169 "169" t170 170 "170" t171 171 "171" t172 172 "172" t173 173 "173" t174 174 "174" t175 175 "175" t176 176 "176" t177 177 "177" t178 178 "178" t179 179 "179" t180 180 "180" t181 181 "181" t182 182 "182" t183 183 "183" t184 184 "184" t185 185 "185" t186 186 "186" t187 187 "187" t188 188 "188" t189 189 "189" t190 190 "190" t191 191 "191" t192 192 "192" t193 193 "193" t194 194 "194" t195 195 "195" t196 196 "196" t197 197 "197" t198 198 "198" t199 199 "199" t200 200 "200" t201 201 "201" t202 202 "202" t203 203 "203" t204 204 "204" t205 205 "205" t206 206 "206" t207 207 "207" t208 208 "208" t209 209 "209" t210 210 "210" t211 211 "211" t212 212 "212" t213 213 "213" t214 214 "214" t215 
215 "215" t216 216 "216" t217 217 "217" t218 218 "218" t219 219 "219" t220 220 "220" t221 221 "221" t222 222 "222" t223 223 "223" t224 224 "224" t225 225 "225" t226 226 "226" t227 227 "227" t228 228 "228" t229 229 "229" t230 230 "230" t231 231 "231" t232 232 "232" t233 233 "233" t234 234 "234" t235 235 "235" t236 236 "236" t237 237 "237" t238 238 "238" t239 239 "239" t240 240 "240" t241 241 "241" t242 242 "242" t243 243 "243" t244 244 "244" t245 245 "245" t246 246 "246" t247 247 "247" t248 248 "248" t249 249 "249" t250 250 "250" t251 251 "251" t252 252 "252" t253 253 "253" t254 254 "254" t255 255 "255" t256 256 "256" t257 257 "257" t258 258 "258" t259 259 "259" t260 260 "260" t261 261 "261" t262 262 "262" t263 263 "263" t264 264 "264" t265 265 "265" t266 266 "266" t267 267 "267" t268 268 "268" t269 269 "269" t270 270 "270" t271 271 "271" t272 272 "272" t273 273 "273" t274 274 "274" t275 275 "275" t276 276 "276" t277 277 "277" t278 278 "278" t279 279 "279" t280 280 "280" t281 281 "281" t282 282 "282" t283 283 "283" t284 284 "284" t285 285 "285" t286 286 "286" t287 287 "287" t288 288 "288" t289 289 "289" t290 290 "290" t291 291 "291" t292 292 "292" t293 293 "293" t294 294 "294" t295 295 "295" t296 296 "296" t297 297 "297" t298 298 "298" t299 299 "299" t300 300 "300" t301 301 "301" t302 302 "302" t303 303 "303" t304 304 "304" t305 305 "305" t306 306 "306" t307 307 "307" t308 308 "308" t309 309 "309" t310 310 "310" t311 311 "311" t312 312 "312" t313 313 "313" t314 314 "314" t315 315 "315" t316 316 "316" t317 317 "317" t318 318 "318" t319 319 "319" t320 320 "320" t321 321 "321" t322 322 "322" t323 323 "323" t324 324 "324" t325 325 "325" t326 326 "326" t327 327 "327" t328 328 "328" t329 329 "329" t330 330 "330" t331 331 "331" t332 332 "332" t333 333 "333" t334 334 "334" t335 335 "335" t336 336 "336" t337 337 "337" t338 338 "338" t339 339 "339" t340 340 "340" t341 341 "341" t342 342 "342" t343 343 "343" t344 344 "344" t345 345 "345" t346 346 "346" t347 347 "347" t348 348 
"348" t349 349 "349" t350 350 "350" t351 351 "351" t352 352 "352" t353 353 "353" t354 354 "354" t355 355 "355" t356 356 "356" t357 357 "357" t358 358 "358" t359 359 "359" t360 360 "360" t361 361 "361" t362 362 "362" t363 363 "363" t364 364 "364" t365 365 "365" t366 366 "366" t367 367 "367" t368 368 "368" t369 369 "369" t370 370 "370" t371 371 "371" t372 372 "372" t373 373 "373" t374 374 "374" t375 375 "375" t376 376 "376" t377 377 "377" t378 378 "378" t379 379 "379" t380 380 "380" t381 381 "381" t382 382 "382" t383 383 "383" t384 384 "384" t385 385 "385" t386 386 "386" t387 387 "387" t388 388 "388" t389 389 "389" t390 390 "390" t391 391 "391" t392 392 "392" t393 393 "393" t394 394 "394" t395 395 "395" t396 396 "396" t397 397 "397" t398 398 "398" t399 399 "399" t400 400 "400" t401 401 "401" t402 402 "402" t403 403 "403" t404 404 "404" t405 405 "405" t406 406 "406" t407 407 "407" t408 408 "408" t409 409 "409" t410 410 "410" t411 411 "411" t412 412 "412" t413 413 "413" t414 414 "414" t415 415 "415" t416 416 "416" t417 417 "417" t418 418 "418" t419 419 "419" t420 420 "420" t421 421 "421" t422 422 "422" t423 423 "423" t424 424 "424" t425 425 "425" t426 426 "426" t427 427 "427" t428 428 "428" t429 429 "429" t430 430 "430" t431 431 "431" t432 432 "432" t433 433 "433" t434 434 "434" t435 435 "435" t436 436 "436" t437 437 "437" t438 438 "438" t439 439 "439" t440 440 "440" t441 441 "441" t442 442 "442" t443 443 "443" t444 444 "444" t445 445 "445" t446 446 "446" t447 447 "447" t448 448 "448" t449 449 "449" t450 450 "450" t451 451 "451" t452 452 "452" t453 453 "453" t454 454 "454" t455 455 "455" t456 456 "456" t457 457 "457" t458 458 "458" t459 459 "459" t460 460 "460" t461 461 "461" t462 462 "462" t463 463 "463" t464 464 "464" t465 465 "465" t466 466 "466" t467 467 "467" t468 468 "468" t469 469 "469" t470 470 "470" t471 471 "471" t472 472 "472" t473 473 "473" t474 474 "474" t475 475 "475" t476 476 "476" t477 477 "477" t478 478 "478" t479 479 "479" t480 480 "480" t481 481 
"481" t482 482 "482" t483 483 "483" t484 484 "484" t485 485 "485" t486 486 "486" t487 487 "487" t488 488 "488" t489 489 "489" t490 490 "490" t491 491 "491" t492 492 "492" t493 493 "493" t494 494 "494" t495 495 "495" t496 496 "496" t497 497 "497" t498 498 "498" t499 499 "499" t500 500 "500" t501 501 "501" t502 502 "502" t503 503 "503" t504 504 "504" t505 505 "505" t506 506 "506" t507 507 "507" t508 508 "508" t509 509 "509" t510 510 "510" t511 511 "511" t512 512 "512" t513 513 "513" t514 514 "514" t515 515 "515" t516 516 "516" t517 517 "517" t518 518 "518" t519 519 "519" t520 520 "520" t521 521 "521" t522 522 "522" t523 523 "523" t524 524 "524" t525 525 "525" t526 526 "526" t527 527 "527" t528 528 "528" t529 529 "529" t530 530 "530" t531 531 "531" t532 532 "532" t533 533 "533" t534 534 "534" t535 535 "535" t536 536 "536" t537 537 "537" t538 538 "538" t539 539 "539" t540 540 "540" t541 541 "541" t542 542 "542" t543 543 "543" t544 544 "544" t545 545 "545" t546 546 "546" t547 547 "547" t548 548 "548" t549 549 "549" t550 550 "550" t551 551 "551" t552 552 "552" t553 553 "553" t554 554 "554" t555 555 "555" t556 556 "556" t557 557 "557" t558 558 "558" t559 559 "559" t560 560 "560" t561 561 "561" t562 562 "562" t563 563 "563" t564 564 "564" t565 565 "565" t566 566 "566" t567 567 "567" t568 568 "568" t569 569 "569" t570 570 "570" t571 571 "571" t572 572 "572" t573 573 "573" t574 574 "574" t575 575 "575" t576 576 "576" t577 577 "577" t578 578 "578" t579 579 "579" t580 580 "580" t581 581 "581" t582 582 "582" t583 583 "583" t584 584 "584" t585 585 "585" t586 586 "586" t587 587 "587" t588 588 "588" t589 589 "589" t590 590 "590" t591 591 "591" t592 592 "592" t593 593 "593" t594 594 "594" t595 595 "595" t596 596 "596" t597 597 "597" t598 598 "598" t599 599 "599" t600 600 "600" t601 601 "601" t602 602 "602" t603 603 "603" t604 604 "604" t605 605 "605" t606 606 "606" t607 607 "607" t608 608 "608" t609 609 "609" t610 610 "610" t611 611 "611" t612 612 "612" t613 613 "613" t614 614 
"614" t615 615 "615" t616 616 "616" t617 617 "617" t618 618 "618" t619 619 "619" t620 620 "620" t621 621 "621" t622 622 "622" t623 623 "623" t624 624 "624" t625 625 "625" t626 626 "626" t627 627 "627" t628 628 "628" t629 629 "629" t630 630 "630" t631 631 "631" t632 632 "632" t633 633 "633" t634 634 "634" t635 635 "635" t636 636 "636" t637 637 "637" t638 638 "638" t639 639 "639" t640 640 "640" t641 641 "641" t642 642 "642" t643 643 "643" t644 644 "644" t645 645 "645" t646 646 "646" t647 647 "647" t648 648 "648" t649 649 "649" t650 650 "650" t651 651 "651" t652 652 "652" t653 653 "653" t654 654 "654" t655 655 "655" t656 656 "656" t657 657 "657" t658 658 "658" t659 659 "659" t660 660 "660" t661 661 "661" t662 662 "662" t663 663 "663" t664 664 "664" t665 665 "665" t666 666 "666" t667 667 "667" t668 668 "668" t669 669 "669" t670 670 "670" t671 671 "671" t672 672 "672" t673 673 "673" t674 674 "674" t675 675 "675" t676 676 "676" t677 677 "677" t678 678 "678" t679 679 "679" t680 680 "680" t681 681 "681" t682 682 "682" t683 683 "683" t684 684 "684" t685 685 "685" t686 686 "686" t687 687 "687" t688 688 "688" t689 689 "689" t690 690 "690" t691 691 "691" t692 692 "692" t693 693 "693" t694 694 "694" t695 695 "695" t696 696 "696" t697 697 "697" t698 698 "698" t699 699 "699" t700 700 "700" t701 701 "701" t702 702 "702" t703 703 "703" t704 704 "704" t705 705 "705" t706 706 "706" t707 707 "707" t708 708 "708" t709 709 "709" t710 710 "710" t711 711 "711" t712 712 "712" t713 713 "713" t714 714 "714" t715 715 "715" t716 716 "716" t717 717 "717" t718 718 "718" t719 719 "719" t720 720 "720" t721 721 "721" t722 722 "722" t723 723 "723" t724 724 "724" t725 725 "725" t726 726 "726" t727 727 "727" t728 728 "728" t729 729 "729" t730 730 "730" t731 731 "731" t732 732 "732" t733 733 "733" t734 734 "734" t735 735 "735" t736 736 "736" t737 737 "737" t738 738 "738" t739 739 "739" t740 740 "740" t741 741 "741" t742 742 "742" t743 743 "743" t744 744 "744" t745 745 "745" t746 746 "746" t747 747 
"747" t748 748 "748" t749 749 "749" t750 750 "750" t751 751 "751" t752 752 "752" t753 753 "753" t754 754 "754" t755 755 "755" t756 756 "756" t757 757 "757" t758 758 "758" t759 759 "759" t760 760 "760" t761 761 "761" t762 762 "762" t763 763 "763" t764 764 "764" t765 765 "765" t766 766 "766" t767 767 "767" t768 768 "768" t769 769 "769" t770 770 "770" t771 771 "771" t772 772 "772" t773 773 "773" t774 774 "774" t775 775 "775" t776 776 "776" t777 777 "777" t778 778 "778" t779 779 "779" t780 780 "780" t781 781 "781" t782 782 "782" t783 783 "783" t784 784 "784" t785 785 "785" t786 786 "786" t787 787 "787" t788 788 "788" t789 789 "789" t790 790 "790" t791 791 "791" t792 792 "792" t793 793 "793" t794 794 "794" t795 795 "795" t796 796 "796" t797 797 "797" t798 798 "798" t799 799 "799" t800 800 "800" t801 801 "801" t802 802 "802" t803 803 "803" t804 804 "804" t805 805 "805" t806 806 "806" t807 807 "807" t808 808 "808" t809 809 "809" t810 810 "810" t811 811 "811" t812 812 "812" t813 813 "813" t814 814 "814" t815 815 "815" t816 816 "816" t817 817 "817" t818 818 "818" t819 819 "819" t820 820 "820" t821 821 "821" t822 822 "822" t823 823 "823" t824 824 "824" t825 825 "825" t826 826 "826" t827 827 "827" t828 828 "828" t829 829 "829" t830 830 "830" t831 831 "831" t832 832 "832" t833 833 "833" t834 834 "834" t835 835 "835" t836 836 "836" t837 837 "837" t838 838 "838" t839 839 "839" t840 840 "840" t841 841 "841" t842 842 "842" t843 843 "843" t844 844 "844" t845 845 "845" t846 846 "846" t847 847 "847" t848 848 "848" t849 849 "849" t850 850 "850" t851 851 "851" t852 852 "852" t853 853 "853" t854 854 "854" t855 855 "855" t856 856 "856" t857 857 "857" t858 858 "858" t859 859 "859" t860 860 "860" t861 861 "861" t862 862 "862" t863 863 "863" t864 864 "864" t865 865 "865" t866 866 "866" t867 867 "867" t868 868 "868" t869 869 "869" t870 870 "870" t871 871 "871" t872 872 "872" t873 873 "873" t874 874 "874" t875 875 "875" t876 876 "876" t877 877 "877" t878 878 "878" t879 879 "879" t880 880 
"880" t881 881 "881" t882 882 "882" t883 883 "883" t884 884 "884" t885 885 "885" t886 886 "886" t887 887 "887" t888 888 "888" t889 889 "889" t890 890 "890" t891 891 "891" t892 892 "892" t893 893 "893" t894 894 "894" t895 895 "895" t896 896 "896" t897 897 "897" t898 898 "898" t899 899 "899" t900 900 "900" t901 901 "901" t902 902 "902" t903 903 "903" t904 904 "904" t905 905 "905" t906 906 "906" t907 907 "907" t908 908 "908" t909 909 "909" t910 910 "910" t911 911 "911" t912 912 "912" t913 913 "913" t914 914 "914" t915 915 "915" t916 916 "916" t917 917 "917" t918 918 "918" t919 919 "919" t920 920 "920" t921 921 "921" t922 922 "922" t923 923 "923" t924 924 "924" t925 925 "925" t926 926 "926" t927 927 "927" t928 928 "928" t929 929 "929" t930 930 "930" t931 931 "931" t932 932 "932" t933 933 "933" t934 934 "934" t935 935 "935" t936 936 "936" t937 937 "937" t938 938 "938" t939 939 "939" t940 940 "940" t941 941 "941" t942 942 "942" t943 943 "943" t944 944 "944" t945 945 "945" t946 946 "946" t947 947 "947" t948 948 "948" t949 949 "949" t950 950 "950" t951 951 "951" t952 952 "952" t953 953 "953" t954 954 "954" t955 955 "955" t956 956 "956" t957 957 "957" t958 958 "958" t959 959 "959" t960 960 "960" t961 961 "961" t962 962 "962" t963 963 "963" t964 964 "964" t965 965 "965" t966 966 "966" t967 967 "967" t968 968 "968" t969 969 "969" t970 970 "970" t971 971 "971" t972 972 "972" t973 973 "973" t974 974 "974" t975 975 "975" t976 976 "976" t977 977 "977" t978 978 "978" t979 979 "979" t980 980 "980" t981 981 "981" t982 982 "982" t983 983 "983" t984 984 "984" t985 985 "985" t986 986 "986" t987 987 "987" t988 988 "988" t989 989 "989" t990 990 "990" t991 991 "991" t992 992 "992" t993 993 "993" t994 994 "994" t995 995 "995" t996 996 "996" t997 997 "997" t998 998 "998" t999 999 "999" t1000 1000 "1000" %% input: exp { assert ($1 == 1); $$ = $1; } | input exp { assert ($2 == $1 + 1); $$ = $2; } ; exp: n1 "1" { assert ($1 == 1); $$ = $1; } | n2 "2" { assert ($1 == 2); $$ = $1; } | n3 "3" { 
assert ($1 == 3); $$ = $1; } | n4 "4" { assert ($1 == 4); $$ = $1; } | n5 "5" { assert ($1 == 5); $$ = $1; } | n6 "6" { assert ($1 == 6); $$ = $1; } | n7 "7" { assert ($1 == 7); $$ = $1; } | n8 "8" { assert ($1 == 8); $$ = $1; } | n9 "9" { assert ($1 == 9); $$ = $1; } | n10 "10" { assert ($1 == 10); $$ = $1; } | n11 "11" { assert ($1 == 11); $$ = $1; } | n12 "12" { assert ($1 == 12); $$ = $1; } | n13 "13" { assert ($1 == 13); $$ = $1; } | n14 "14" { assert ($1 == 14); $$ = $1; } | n15 "15" { assert ($1 == 15); $$ = $1; } | n16 "16" { assert ($1 == 16); $$ = $1; } | n17 "17" { assert ($1 == 17); $$ = $1; } | n18 "18" { assert ($1 == 18); $$ = $1; } | n19 "19" { assert ($1 == 19); $$ = $1; } | n20 "20" { assert ($1 == 20); $$ = $1; } | n21 "21" { assert ($1 == 21); $$ = $1; } | n22 "22" { assert ($1 == 22); $$ = $1; } | n23 "23" { assert ($1 == 23); $$ = $1; } | n24 "24" { assert ($1 == 24); $$ = $1; } | n25 "25" { assert ($1 == 25); $$ = $1; } | n26 "26" { assert ($1 == 26); $$ = $1; } | n27 "27" { assert ($1 == 27); $$ = $1; } | n28 "28" { assert ($1 == 28); $$ = $1; } | n29 "29" { assert ($1 == 29); $$ = $1; } | n30 "30" { assert ($1 == 30); $$ = $1; } | n31 "31" { assert ($1 == 31); $$ = $1; } | n32 "32" { assert ($1 == 32); $$ = $1; } | n33 "33" { assert ($1 == 33); $$ = $1; } | n34 "34" { assert ($1 == 34); $$ = $1; } | n35 "35" { assert ($1 == 35); $$ = $1; } | n36 "36" { assert ($1 == 36); $$ = $1; } | n37 "37" { assert ($1 == 37); $$ = $1; } | n38 "38" { assert ($1 == 38); $$ = $1; } | n39 "39" { assert ($1 == 39); $$ = $1; } | n40 "40" { assert ($1 == 40); $$ = $1; } | n41 "41" { assert ($1 == 41); $$ = $1; } | n42 "42" { assert ($1 == 42); $$ = $1; } | n43 "43" { assert ($1 == 43); $$ = $1; } | n44 "44" { assert ($1 == 44); $$ = $1; } | n45 "45" { assert ($1 == 45); $$ = $1; } | n46 "46" { assert ($1 == 46); $$ = $1; } | n47 "47" { assert ($1 == 47); $$ = $1; } | n48 "48" { assert ($1 == 48); $$ = $1; } | n49 "49" { assert ($1 == 49); $$ = $1; } | n50 "50" 
{ assert ($1 == 50); $$ = $1; } | n51 "51" { assert ($1 == 51); $$ = $1; } | n52 "52" { assert ($1 == 52); $$ = $1; } | n53 "53" { assert ($1 == 53); $$ = $1; } | n54 "54" { assert ($1 == 54); $$ = $1; } | n55 "55" { assert ($1 == 55); $$ = $1; } | n56 "56" { assert ($1 == 56); $$ = $1; } | n57 "57" { assert ($1 == 57); $$ = $1; } | n58 "58" { assert ($1 == 58); $$ = $1; } | n59 "59" { assert ($1 == 59); $$ = $1; } | n60 "60" { assert ($1 == 60); $$ = $1; } | n61 "61" { assert ($1 == 61); $$ = $1; } | n62 "62" { assert ($1 == 62); $$ = $1; } | n63 "63" { assert ($1 == 63); $$ = $1; } | n64 "64" { assert ($1 == 64); $$ = $1; } | n65 "65" { assert ($1 == 65); $$ = $1; } | n66 "66" { assert ($1 == 66); $$ = $1; } | n67 "67" { assert ($1 == 67); $$ = $1; } | n68 "68" { assert ($1 == 68); $$ = $1; } | n69 "69" { assert ($1 == 69); $$ = $1; } | n70 "70" { assert ($1 == 70); $$ = $1; } | n71 "71" { assert ($1 == 71); $$ = $1; } | n72 "72" { assert ($1 == 72); $$ = $1; } | n73 "73" { assert ($1 == 73); $$ = $1; } | n74 "74" { assert ($1 == 74); $$ = $1; } | n75 "75" { assert ($1 == 75); $$ = $1; } | n76 "76" { assert ($1 == 76); $$ = $1; } | n77 "77" { assert ($1 == 77); $$ = $1; } | n78 "78" { assert ($1 == 78); $$ = $1; } | n79 "79" { assert ($1 == 79); $$ = $1; } | n80 "80" { assert ($1 == 80); $$ = $1; } | n81 "81" { assert ($1 == 81); $$ = $1; } | n82 "82" { assert ($1 == 82); $$ = $1; } | n83 "83" { assert ($1 == 83); $$ = $1; } | n84 "84" { assert ($1 == 84); $$ = $1; } | n85 "85" { assert ($1 == 85); $$ = $1; } | n86 "86" { assert ($1 == 86); $$ = $1; } | n87 "87" { assert ($1 == 87); $$ = $1; } | n88 "88" { assert ($1 == 88); $$ = $1; } | n89 "89" { assert ($1 == 89); $$ = $1; } | n90 "90" { assert ($1 == 90); $$ = $1; } | n91 "91" { assert ($1 == 91); $$ = $1; } | n92 "92" { assert ($1 == 92); $$ = $1; } | n93 "93" { assert ($1 == 93); $$ = $1; } | n94 "94" { assert ($1 == 94); $$ = $1; } | n95 "95" { assert ($1 == 95); $$ = $1; } | n96 "96" { assert ($1 == 96); 
$$ = $1; } | n97 "97" { assert ($1 == 97); $$ = $1; } | n98 "98" { assert ($1 == 98); $$ = $1; } | n99 "99" { assert ($1 == 99); $$ = $1; } | n100 "100" { assert ($1 == 100); $$ = $1; } | n101 "101" { assert ($1 == 101); $$ = $1; } | n102 "102" { assert ($1 == 102); $$ = $1; } | n103 "103" { assert ($1 == 103); $$ = $1; } | n104 "104" { assert ($1 == 104); $$ = $1; } | n105 "105" { assert ($1 == 105); $$ = $1; } | n106 "106" { assert ($1 == 106); $$ = $1; } | n107 "107" { assert ($1 == 107); $$ = $1; } | n108 "108" { assert ($1 == 108); $$ = $1; } | n109 "109" { assert ($1 == 109); $$ = $1; } | n110 "110" { assert ($1 == 110); $$ = $1; } | n111 "111" { assert ($1 == 111); $$ = $1; } | n112 "112" { assert ($1 == 112); $$ = $1; } | n113 "113" { assert ($1 == 113); $$ = $1; } | n114 "114" { assert ($1 == 114); $$ = $1; } | n115 "115" { assert ($1 == 115); $$ = $1; } | n116 "116" { assert ($1 == 116); $$ = $1; } | n117 "117" { assert ($1 == 117); $$ = $1; } | n118 "118" { assert ($1 == 118); $$ = $1; } | n119 "119" { assert ($1 == 119); $$ = $1; } | n120 "120" { assert ($1 == 120); $$ = $1; } | n121 "121" { assert ($1 == 121); $$ = $1; } | n122 "122" { assert ($1 == 122); $$ = $1; } | n123 "123" { assert ($1 == 123); $$ = $1; } | n124 "124" { assert ($1 == 124); $$ = $1; } | n125 "125" { assert ($1 == 125); $$ = $1; } | n126 "126" { assert ($1 == 126); $$ = $1; } | n127 "127" { assert ($1 == 127); $$ = $1; } | n128 "128" { assert ($1 == 128); $$ = $1; } | n129 "129" { assert ($1 == 129); $$ = $1; } | n130 "130" { assert ($1 == 130); $$ = $1; } | n131 "131" { assert ($1 == 131); $$ = $1; } | n132 "132" { assert ($1 == 132); $$ = $1; } | n133 "133" { assert ($1 == 133); $$ = $1; } | n134 "134" { assert ($1 == 134); $$ = $1; } | n135 "135" { assert ($1 == 135); $$ = $1; } | n136 "136" { assert ($1 == 136); $$ = $1; } | n137 "137" { assert ($1 == 137); $$ = $1; } | n138 "138" { assert ($1 == 138); $$ = $1; } | n139 "139" { assert ($1 == 139); $$ = $1; } | n140 "140" { 
assert ($1 == 140); $$ = $1; } | n141 "141" { assert ($1 == 141); $$ = $1; } | n142 "142" { assert ($1 == 142); $$ = $1; } | n143 "143" { assert ($1 == 143); $$ = $1; } | n144 "144" { assert ($1 == 144); $$ = $1; } | n145 "145" { assert ($1 == 145); $$ = $1; } | n146 "146" { assert ($1 == 146); $$ = $1; } | n147 "147" { assert ($1 == 147); $$ = $1; } | n148 "148" { assert ($1 == 148); $$ = $1; } | n149 "149" { assert ($1 == 149); $$ = $1; } | n150 "150" { assert ($1 == 150); $$ = $1; } | n151 "151" { assert ($1 == 151); $$ = $1; } | n152 "152" { assert ($1 == 152); $$ = $1; } | n153 "153" { assert ($1 == 153); $$ = $1; } | n154 "154" { assert ($1 == 154); $$ = $1; } | n155 "155" { assert ($1 == 155); $$ = $1; } | n156 "156" { assert ($1 == 156); $$ = $1; } | n157 "157" { assert ($1 == 157); $$ = $1; } | n158 "158" { assert ($1 == 158); $$ = $1; } | n159 "159" { assert ($1 == 159); $$ = $1; } | n160 "160" { assert ($1 == 160); $$ = $1; } | n161 "161" { assert ($1 == 161); $$ = $1; } | n162 "162" { assert ($1 == 162); $$ = $1; } | n163 "163" { assert ($1 == 163); $$ = $1; } | n164 "164" { assert ($1 == 164); $$ = $1; } | n165 "165" { assert ($1 == 165); $$ = $1; } | n166 "166" { assert ($1 == 166); $$ = $1; } | n167 "167" { assert ($1 == 167); $$ = $1; } | n168 "168" { assert ($1 == 168); $$ = $1; } | n169 "169" { assert ($1 == 169); $$ = $1; } | n170 "170" { assert ($1 == 170); $$ = $1; } | n171 "171" { assert ($1 == 171); $$ = $1; } | n172 "172" { assert ($1 == 172); $$ = $1; } | n173 "173" { assert ($1 == 173); $$ = $1; } | n174 "174" { assert ($1 == 174); $$ = $1; } | n175 "175" { assert ($1 == 175); $$ = $1; } | n176 "176" { assert ($1 == 176); $$ = $1; } | n177 "177" { assert ($1 == 177); $$ = $1; } | n178 "178" { assert ($1 == 178); $$ = $1; } | n179 "179" { assert ($1 == 179); $$ = $1; } | n180 "180" { assert ($1 == 180); $$ = $1; } | n181 "181" { assert ($1 == 181); $$ = $1; } | n182 "182" { assert ($1 == 182); $$ = $1; } | n183 "183" { assert ($1 == 183); 
$$ = $1; } | n184 "184" { assert ($1 == 184); $$ = $1; } | n185 "185" { assert ($1 == 185); $$ = $1; } | n186 "186" { assert ($1 == 186); $$ = $1; } | n187 "187" { assert ($1 == 187); $$ = $1; } | n188 "188" { assert ($1 == 188); $$ = $1; } | n189 "189" { assert ($1 == 189); $$ = $1; } | n190 "190" { assert ($1 == 190); $$ = $1; } | n191 "191" { assert ($1 == 191); $$ = $1; } | n192 "192" { assert ($1 == 192); $$ = $1; } | n193 "193" { assert ($1 == 193); $$ = $1; } | n194 "194" { assert ($1 == 194); $$ = $1; } | n195 "195" { assert ($1 == 195); $$ = $1; } | n196 "196" { assert ($1 == 196); $$ = $1; } | n197 "197" { assert ($1 == 197); $$ = $1; } | n198 "198" { assert ($1 == 198); $$ = $1; } | n199 "199" { assert ($1 == 199); $$ = $1; } | n200 "200" { assert ($1 == 200); $$ = $1; } | n201 "201" { assert ($1 == 201); $$ = $1; } | n202 "202" { assert ($1 == 202); $$ = $1; } | n203 "203" { assert ($1 == 203); $$ = $1; } | n204 "204" { assert ($1 == 204); $$ = $1; } | n205 "205" { assert ($1 == 205); $$ = $1; } | n206 "206" { assert ($1 == 206); $$ = $1; } | n207 "207" { assert ($1 == 207); $$ = $1; } | n208 "208" { assert ($1 == 208); $$ = $1; } | n209 "209" { assert ($1 == 209); $$ = $1; } | n210 "210" { assert ($1 == 210); $$ = $1; } | n211 "211" { assert ($1 == 211); $$ = $1; } | n212 "212" { assert ($1 == 212); $$ = $1; } | n213 "213" { assert ($1 == 213); $$ = $1; } | n214 "214" { assert ($1 == 214); $$ = $1; } | n215 "215" { assert ($1 == 215); $$ = $1; } | n216 "216" { assert ($1 == 216); $$ = $1; } | n217 "217" { assert ($1 == 217); $$ = $1; } | n218 "218" { assert ($1 == 218); $$ = $1; } | n219 "219" { assert ($1 == 219); $$ = $1; } | n220 "220" { assert ($1 == 220); $$ = $1; } | n221 "221" { assert ($1 == 221); $$ = $1; } | n222 "222" { assert ($1 == 222); $$ = $1; } | n223 "223" { assert ($1 == 223); $$ = $1; } | n224 "224" { assert ($1 == 224); $$ = $1; } | n225 "225" { assert ($1 == 225); $$ = $1; } | n226 "226" { assert ($1 == 226); $$ = $1; } | n227 
"227" { assert ($1 == 227); $$ = $1; } | n228 "228" { assert ($1 == 228); $$ = $1; } | n229 "229" { assert ($1 == 229); $$ = $1; } | n230 "230" { assert ($1 == 230); $$ = $1; } | n231 "231" { assert ($1 == 231); $$ = $1; } | n232 "232" { assert ($1 == 232); $$ = $1; } | n233 "233" { assert ($1 == 233); $$ = $1; } | n234 "234" { assert ($1 == 234); $$ = $1; } | n235 "235" { assert ($1 == 235); $$ = $1; } | n236 "236" { assert ($1 == 236); $$ = $1; } | n237 "237" { assert ($1 == 237); $$ = $1; } | n238 "238" { assert ($1 == 238); $$ = $1; } | n239 "239" { assert ($1 == 239); $$ = $1; } | n240 "240" { assert ($1 == 240); $$ = $1; } | n241 "241" { assert ($1 == 241); $$ = $1; } | n242 "242" { assert ($1 == 242); $$ = $1; } | n243 "243" { assert ($1 == 243); $$ = $1; } | n244 "244" { assert ($1 == 244); $$ = $1; } | n245 "245" { assert ($1 == 245); $$ = $1; } | n246 "246" { assert ($1 == 246); $$ = $1; } | n247 "247" { assert ($1 == 247); $$ = $1; } | n248 "248" { assert ($1 == 248); $$ = $1; } | n249 "249" { assert ($1 == 249); $$ = $1; } | n250 "250" { assert ($1 == 250); $$ = $1; } | n251 "251" { assert ($1 == 251); $$ = $1; } | n252 "252" { assert ($1 == 252); $$ = $1; } | n253 "253" { assert ($1 == 253); $$ = $1; } | n254 "254" { assert ($1 == 254); $$ = $1; } | n255 "255" { assert ($1 == 255); $$ = $1; } | n256 "256" { assert ($1 == 256); $$ = $1; } | n257 "257" { assert ($1 == 257); $$ = $1; } | n258 "258" { assert ($1 == 258); $$ = $1; } | n259 "259" { assert ($1 == 259); $$ = $1; } | n260 "260" { assert ($1 == 260); $$ = $1; } | n261 "261" { assert ($1 == 261); $$ = $1; } | n262 "262" { assert ($1 == 262); $$ = $1; } | n263 "263" { assert ($1 == 263); $$ = $1; } | n264 "264" { assert ($1 == 264); $$ = $1; } | n265 "265" { assert ($1 == 265); $$ = $1; } | n266 "266" { assert ($1 == 266); $$ = $1; } | n267 "267" { assert ($1 == 267); $$ = $1; } | n268 "268" { assert ($1 == 268); $$ = $1; } | n269 "269" { assert ($1 == 269); $$ = $1; } | n270 "270" { assert ($1 == 
270); $$ = $1; } | n271 "271" { assert ($1 == 271); $$ = $1; } | n272 "272" { assert ($1 == 272); $$ = $1; } | n273 "273" { assert ($1 == 273); $$ = $1; } | n274 "274" { assert ($1 == 274); $$ = $1; } | n275 "275" { assert ($1 == 275); $$ = $1; } | n276 "276" { assert ($1 == 276); $$ = $1; } | n277 "277" { assert ($1 == 277); $$ = $1; } | n278 "278" { assert ($1 == 278); $$ = $1; } | n279 "279" { assert ($1 == 279); $$ = $1; } | n280 "280" { assert ($1 == 280); $$ = $1; } | n281 "281" { assert ($1 == 281); $$ = $1; } | n282 "282" { assert ($1 == 282); $$ = $1; } | n283 "283" { assert ($1 == 283); $$ = $1; } | n284 "284" { assert ($1 == 284); $$ = $1; } | n285 "285" { assert ($1 == 285); $$ = $1; } | n286 "286" { assert ($1 == 286); $$ = $1; } | n287 "287" { assert ($1 == 287); $$ = $1; } | n288 "288" { assert ($1 == 288); $$ = $1; } | n289 "289" { assert ($1 == 289); $$ = $1; } | n290 "290" { assert ($1 == 290); $$ = $1; } | n291 "291" { assert ($1 == 291); $$ = $1; } | n292 "292" { assert ($1 == 292); $$ = $1; } | n293 "293" { assert ($1 == 293); $$ = $1; } | n294 "294" { assert ($1 == 294); $$ = $1; } | n295 "295" { assert ($1 == 295); $$ = $1; } | n296 "296" { assert ($1 == 296); $$ = $1; } | n297 "297" { assert ($1 == 297); $$ = $1; } | n298 "298" { assert ($1 == 298); $$ = $1; } | n299 "299" { assert ($1 == 299); $$ = $1; } | n300 "300" { assert ($1 == 300); $$ = $1; } | n301 "301" { assert ($1 == 301); $$ = $1; } | n302 "302" { assert ($1 == 302); $$ = $1; } | n303 "303" { assert ($1 == 303); $$ = $1; } | n304 "304" { assert ($1 == 304); $$ = $1; } | n305 "305" { assert ($1 == 305); $$ = $1; } | n306 "306" { assert ($1 == 306); $$ = $1; } | n307 "307" { assert ($1 == 307); $$ = $1; } | n308 "308" { assert ($1 == 308); $$ = $1; } | n309 "309" { assert ($1 == 309); $$ = $1; } | n310 "310" { assert ($1 == 310); $$ = $1; } | n311 "311" { assert ($1 == 311); $$ = $1; } | n312 "312" { assert ($1 == 312); $$ = $1; } | n313 "313" { assert ($1 == 313); $$ = $1; } | 
n314 "314" { assert ($1 == 314); $$ = $1; } | n315 "315" { assert ($1 == 315); $$ = $1; } | n316 "316" { assert ($1 == 316); $$ = $1; } | n317 "317" { assert ($1 == 317); $$ = $1; } | n318 "318" { assert ($1 == 318); $$ = $1; } | n319 "319" { assert ($1 == 319); $$ = $1; } | n320 "320" { assert ($1 == 320); $$ = $1; } | n321 "321" { assert ($1 == 321); $$ = $1; } | n322 "322" { assert ($1 == 322); $$ = $1; } | n323 "323" { assert ($1 == 323); $$ = $1; } | n324 "324" { assert ($1 == 324); $$ = $1; } | n325 "325" { assert ($1 == 325); $$ = $1; } | n326 "326" { assert ($1 == 326); $$ = $1; } | n327 "327" { assert ($1 == 327); $$ = $1; } | n328 "328" { assert ($1 == 328); $$ = $1; } | n329 "329" { assert ($1 == 329); $$ = $1; } | n330 "330" { assert ($1 == 330); $$ = $1; } | n331 "331" { assert ($1 == 331); $$ = $1; } | n332 "332" { assert ($1 == 332); $$ = $1; } | n333 "333" { assert ($1 == 333); $$ = $1; } | n334 "334" { assert ($1 == 334); $$ = $1; } | n335 "335" { assert ($1 == 335); $$ = $1; } | n336 "336" { assert ($1 == 336); $$ = $1; } | n337 "337" { assert ($1 == 337); $$ = $1; } | n338 "338" { assert ($1 == 338); $$ = $1; } | n339 "339" { assert ($1 == 339); $$ = $1; } | n340 "340" { assert ($1 == 340); $$ = $1; } | n341 "341" { assert ($1 == 341); $$ = $1; } | n342 "342" { assert ($1 == 342); $$ = $1; } | n343 "343" { assert ($1 == 343); $$ = $1; } | n344 "344" { assert ($1 == 344); $$ = $1; } | n345 "345" { assert ($1 == 345); $$ = $1; } | n346 "346" { assert ($1 == 346); $$ = $1; } | n347 "347" { assert ($1 == 347); $$ = $1; } | n348 "348" { assert ($1 == 348); $$ = $1; } | n349 "349" { assert ($1 == 349); $$ = $1; } | n350 "350" { assert ($1 == 350); $$ = $1; } | n351 "351" { assert ($1 == 351); $$ = $1; } | n352 "352" { assert ($1 == 352); $$ = $1; } | n353 "353" { assert ($1 == 353); $$ = $1; } | n354 "354" { assert ($1 == 354); $$ = $1; } | n355 "355" { assert ($1 == 355); $$ = $1; } | n356 "356" { assert ($1 == 356); $$ = $1; } | n357 "357" { assert 
($1 == 357); $$ = $1; } | n358 "358" { assert ($1 == 358); $$ = $1; } | n359 "359" { assert ($1 == 359); $$ = $1; } | n360 "360" { assert ($1 == 360); $$ = $1; } | n361 "361" { assert ($1 == 361); $$ = $1; } | n362 "362" { assert ($1 == 362); $$ = $1; } | n363 "363" { assert ($1 == 363); $$ = $1; } | n364 "364" { assert ($1 == 364); $$ = $1; } | n365 "365" { assert ($1 == 365); $$ = $1; } | n366 "366" { assert ($1 == 366); $$ = $1; } | n367 "367" { assert ($1 == 367); $$ = $1; } | n368 "368" { assert ($1 == 368); $$ = $1; } | n369 "369" { assert ($1 == 369); $$ = $1; } | n370 "370" { assert ($1 == 370); $$ = $1; } | n371 "371" { assert ($1 == 371); $$ = $1; } | n372 "372" { assert ($1 == 372); $$ = $1; } | n373 "373" { assert ($1 == 373); $$ = $1; } | n374 "374" { assert ($1 == 374); $$ = $1; } | n375 "375" { assert ($1 == 375); $$ = $1; } | n376 "376" { assert ($1 == 376); $$ = $1; } | n377 "377" { assert ($1 == 377); $$ = $1; } | n378 "378" { assert ($1 == 378); $$ = $1; } | n379 "379" { assert ($1 == 379); $$ = $1; } | n380 "380" { assert ($1 == 380); $$ = $1; } | n381 "381" { assert ($1 == 381); $$ = $1; } | n382 "382" { assert ($1 == 382); $$ = $1; } | n383 "383" { assert ($1 == 383); $$ = $1; } | n384 "384" { assert ($1 == 384); $$ = $1; } | n385 "385" { assert ($1 == 385); $$ = $1; } | n386 "386" { assert ($1 == 386); $$ = $1; } | n387 "387" { assert ($1 == 387); $$ = $1; } | n388 "388" { assert ($1 == 388); $$ = $1; } | n389 "389" { assert ($1 == 389); $$ = $1; } | n390 "390" { assert ($1 == 390); $$ = $1; } | n391 "391" { assert ($1 == 391); $$ = $1; } | n392 "392" { assert ($1 == 392); $$ = $1; } | n393 "393" { assert ($1 == 393); $$ = $1; } | n394 "394" { assert ($1 == 394); $$ = $1; } | n395 "395" { assert ($1 == 395); $$ = $1; } | n396 "396" { assert ($1 == 396); $$ = $1; } | n397 "397" { assert ($1 == 397); $$ = $1; } | n398 "398" { assert ($1 == 398); $$ = $1; } | n399 "399" { assert ($1 == 399); $$ = $1; } | n400 "400" { assert ($1 == 400); $$ = $1; 
} | n401 "401" { assert ($1 == 401); $$ = $1; } | n402 "402" { assert ($1 == 402); $$ = $1; } | n403 "403" { assert ($1 == 403); $$ = $1; } | n404 "404" { assert ($1 == 404); $$ = $1; } | n405 "405" { assert ($1 == 405); $$ = $1; } | n406 "406" { assert ($1 == 406); $$ = $1; } | n407 "407" { assert ($1 == 407); $$ = $1; } | n408 "408" { assert ($1 == 408); $$ = $1; } | n409 "409" { assert ($1 == 409); $$ = $1; } | n410 "410" { assert ($1 == 410); $$ = $1; } | n411 "411" { assert ($1 == 411); $$ = $1; } | n412 "412" { assert ($1 == 412); $$ = $1; } | n413 "413" { assert ($1 == 413); $$ = $1; } | n414 "414" { assert ($1 == 414); $$ = $1; } | n415 "415" { assert ($1 == 415); $$ = $1; } | n416 "416" { assert ($1 == 416); $$ = $1; } | n417 "417" { assert ($1 == 417); $$ = $1; } | n418 "418" { assert ($1 == 418); $$ = $1; } | n419 "419" { assert ($1 == 419); $$ = $1; } | n420 "420" { assert ($1 == 420); $$ = $1; } | n421 "421" { assert ($1 == 421); $$ = $1; } | n422 "422" { assert ($1 == 422); $$ = $1; } | n423 "423" { assert ($1 == 423); $$ = $1; } | n424 "424" { assert ($1 == 424); $$ = $1; } | n425 "425" { assert ($1 == 425); $$ = $1; } | n426 "426" { assert ($1 == 426); $$ = $1; } | n427 "427" { assert ($1 == 427); $$ = $1; } | n428 "428" { assert ($1 == 428); $$ = $1; } | n429 "429" { assert ($1 == 429); $$ = $1; } | n430 "430" { assert ($1 == 430); $$ = $1; } | n431 "431" { assert ($1 == 431); $$ = $1; } | n432 "432" { assert ($1 == 432); $$ = $1; } | n433 "433" { assert ($1 == 433); $$ = $1; } | n434 "434" { assert ($1 == 434); $$ = $1; } | n435 "435" { assert ($1 == 435); $$ = $1; } | n436 "436" { assert ($1 == 436); $$ = $1; } | n437 "437" { assert ($1 == 437); $$ = $1; } | n438 "438" { assert ($1 == 438); $$ = $1; } | n439 "439" { assert ($1 == 439); $$ = $1; } | n440 "440" { assert ($1 == 440); $$ = $1; } | n441 "441" { assert ($1 == 441); $$ = $1; } | n442 "442" { assert ($1 == 442); $$ = $1; } | n443 "443" { assert ($1 == 443); $$ = $1; } | n444 "444" { 
assert ($1 == 444); $$ = $1; } | n445 "445" { assert ($1 == 445); $$ = $1; } | n446 "446" { assert ($1 == 446); $$ = $1; } | n447 "447" { assert ($1 == 447); $$ = $1; } | n448 "448" { assert ($1 == 448); $$ = $1; } | n449 "449" { assert ($1 == 449); $$ = $1; } | n450 "450" { assert ($1 == 450); $$ = $1; } | n451 "451" { assert ($1 == 451); $$ = $1; } | n452 "452" { assert ($1 == 452); $$ = $1; } | n453 "453" { assert ($1 == 453); $$ = $1; } | n454 "454" { assert ($1 == 454); $$ = $1; } | n455 "455" { assert ($1 == 455); $$ = $1; } | n456 "456" { assert ($1 == 456); $$ = $1; } | n457 "457" { assert ($1 == 457); $$ = $1; } | n458 "458" { assert ($1 == 458); $$ = $1; } | n459 "459" { assert ($1 == 459); $$ = $1; } | n460 "460" { assert ($1 == 460); $$ = $1; } | n461 "461" { assert ($1 == 461); $$ = $1; } | n462 "462" { assert ($1 == 462); $$ = $1; } | n463 "463" { assert ($1 == 463); $$ = $1; } | n464 "464" { assert ($1 == 464); $$ = $1; } | n465 "465" { assert ($1 == 465); $$ = $1; } | n466 "466" { assert ($1 == 466); $$ = $1; } | n467 "467" { assert ($1 == 467); $$ = $1; } | n468 "468" { assert ($1 == 468); $$ = $1; } | n469 "469" { assert ($1 == 469); $$ = $1; } | n470 "470" { assert ($1 == 470); $$ = $1; } | n471 "471" { assert ($1 == 471); $$ = $1; } | n472 "472" { assert ($1 == 472); $$ = $1; } | n473 "473" { assert ($1 == 473); $$ = $1; } | n474 "474" { assert ($1 == 474); $$ = $1; } | n475 "475" { assert ($1 == 475); $$ = $1; } | n476 "476" { assert ($1 == 476); $$ = $1; } | n477 "477" { assert ($1 == 477); $$ = $1; } | n478 "478" { assert ($1 == 478); $$ = $1; } | n479 "479" { assert ($1 == 479); $$ = $1; } | n480 "480" { assert ($1 == 480); $$ = $1; } | n481 "481" { assert ($1 == 481); $$ = $1; } | n482 "482" { assert ($1 == 482); $$ = $1; } | n483 "483" { assert ($1 == 483); $$ = $1; } | n484 "484" { assert ($1 == 484); $$ = $1; } | n485 "485" { assert ($1 == 485); $$ = $1; } | n486 "486" { assert ($1 == 486); $$ = $1; } | n487 "487" { assert ($1 == 487); 
$$ = $1; } | n488 "488" { assert ($1 == 488); $$ = $1; } | n489 "489" { assert ($1 == 489); $$ = $1; } | n490 "490" { assert ($1 == 490); $$ = $1; } | n491 "491" { assert ($1 == 491); $$ = $1; } | n492 "492" { assert ($1 == 492); $$ = $1; } | n493 "493" { assert ($1 == 493); $$ = $1; } | n494 "494" { assert ($1 == 494); $$ = $1; } | n495 "495" { assert ($1 == 495); $$ = $1; } | n496 "496" { assert ($1 == 496); $$ = $1; } | n497 "497" { assert ($1 == 497); $$ = $1; } | n498 "498" { assert ($1 == 498); $$ = $1; } | n499 "499" { assert ($1 == 499); $$ = $1; } | n500 "500" { assert ($1 == 500); $$ = $1; } | n501 "501" { assert ($1 == 501); $$ = $1; } | n502 "502" { assert ($1 == 502); $$ = $1; } | n503 "503" { assert ($1 == 503); $$ = $1; } | n504 "504" { assert ($1 == 504); $$ = $1; } | n505 "505" { assert ($1 == 505); $$ = $1; } | n506 "506" { assert ($1 == 506); $$ = $1; } | n507 "507" { assert ($1 == 507); $$ = $1; } | n508 "508" { assert ($1 == 508); $$ = $1; } | n509 "509" { assert ($1 == 509); $$ = $1; } | n510 "510" { assert ($1 == 510); $$ = $1; } | n511 "511" { assert ($1 == 511); $$ = $1; } | n512 "512" { assert ($1 == 512); $$ = $1; } | n513 "513" { assert ($1 == 513); $$ = $1; } | n514 "514" { assert ($1 == 514); $$ = $1; } | n515 "515" { assert ($1 == 515); $$ = $1; } | n516 "516" { assert ($1 == 516); $$ = $1; } | n517 "517" { assert ($1 == 517); $$ = $1; } | n518 "518" { assert ($1 == 518); $$ = $1; } | n519 "519" { assert ($1 == 519); $$ = $1; } | n520 "520" { assert ($1 == 520); $$ = $1; } | n521 "521" { assert ($1 == 521); $$ = $1; } | n522 "522" { assert ($1 == 522); $$ = $1; } | n523 "523" { assert ($1 == 523); $$ = $1; } | n524 "524" { assert ($1 == 524); $$ = $1; } | n525 "525" { assert ($1 == 525); $$ = $1; } | n526 "526" { assert ($1 == 526); $$ = $1; } | n527 "527" { assert ($1 == 527); $$ = $1; } | n528 "528" { assert ($1 == 528); $$ = $1; } | n529 "529" { assert ($1 == 529); $$ = $1; } | n530 "530" { assert ($1 == 530); $$ = $1; } | n531 
"531" { assert ($1 == 531); $$ = $1; } | n532 "532" { assert ($1 == 532); $$ = $1; } | n533 "533" { assert ($1 == 533); $$ = $1; } | n534 "534" { assert ($1 == 534); $$ = $1; } | n535 "535" { assert ($1 == 535); $$ = $1; } | n536 "536" { assert ($1 == 536); $$ = $1; } | n537 "537" { assert ($1 == 537); $$ = $1; } | n538 "538" { assert ($1 == 538); $$ = $1; } | n539 "539" { assert ($1 == 539); $$ = $1; } | n540 "540" { assert ($1 == 540); $$ = $1; } | n541 "541" { assert ($1 == 541); $$ = $1; } | n542 "542" { assert ($1 == 542); $$ = $1; } | n543 "543" { assert ($1 == 543); $$ = $1; } | n544 "544" { assert ($1 == 544); $$ = $1; } | n545 "545" { assert ($1 == 545); $$ = $1; } | n546 "546" { assert ($1 == 546); $$ = $1; } | n547 "547" { assert ($1 == 547); $$ = $1; } | n548 "548" { assert ($1 == 548); $$ = $1; } | n549 "549" { assert ($1 == 549); $$ = $1; } | n550 "550" { assert ($1 == 550); $$ = $1; } | n551 "551" { assert ($1 == 551); $$ = $1; } | n552 "552" { assert ($1 == 552); $$ = $1; } | n553 "553" { assert ($1 == 553); $$ = $1; } | n554 "554" { assert ($1 == 554); $$ = $1; } | n555 "555" { assert ($1 == 555); $$ = $1; } | n556 "556" { assert ($1 == 556); $$ = $1; } | n557 "557" { assert ($1 == 557); $$ = $1; } | n558 "558" { assert ($1 == 558); $$ = $1; } | n559 "559" { assert ($1 == 559); $$ = $1; } | n560 "560" { assert ($1 == 560); $$ = $1; } | n561 "561" { assert ($1 == 561); $$ = $1; } | n562 "562" { assert ($1 == 562); $$ = $1; } | n563 "563" { assert ($1 == 563); $$ = $1; } | n564 "564" { assert ($1 == 564); $$ = $1; } | n565 "565" { assert ($1 == 565); $$ = $1; } | n566 "566" { assert ($1 == 566); $$ = $1; } | n567 "567" { assert ($1 == 567); $$ = $1; } | n568 "568" { assert ($1 == 568); $$ = $1; } | n569 "569" { assert ($1 == 569); $$ = $1; } | n570 "570" { assert ($1 == 570); $$ = $1; } | n571 "571" { assert ($1 == 571); $$ = $1; } | n572 "572" { assert ($1 == 572); $$ = $1; } | n573 "573" { assert ($1 == 573); $$ = $1; } | n574 "574" { assert ($1 == 
574); $$ = $1; } | n575 "575" { assert ($1 == 575); $$ = $1; } | n576 "576" { assert ($1 == 576); $$ = $1; } | n577 "577" { assert ($1 == 577); $$ = $1; } | n578 "578" { assert ($1 == 578); $$ = $1; } | n579 "579" { assert ($1 == 579); $$ = $1; } | n580 "580" { assert ($1 == 580); $$ = $1; } | n581 "581" { assert ($1 == 581); $$ = $1; } | n582 "582" { assert ($1 == 582); $$ = $1; } | n583 "583" { assert ($1 == 583); $$ = $1; } | n584 "584" { assert ($1 == 584); $$ = $1; } | n585 "585" { assert ($1 == 585); $$ = $1; } | n586 "586" { assert ($1 == 586); $$ = $1; } | n587 "587" { assert ($1 == 587); $$ = $1; } | n588 "588" { assert ($1 == 588); $$ = $1; } | n589 "589" { assert ($1 == 589); $$ = $1; } | n590 "590" { assert ($1 == 590); $$ = $1; } | n591 "591" { assert ($1 == 591); $$ = $1; } | n592 "592" { assert ($1 == 592); $$ = $1; } | n593 "593" { assert ($1 == 593); $$ = $1; } | n594 "594" { assert ($1 == 594); $$ = $1; } | n595 "595" { assert ($1 == 595); $$ = $1; } | n596 "596" { assert ($1 == 596); $$ = $1; } | n597 "597" { assert ($1 == 597); $$ = $1; } | n598 "598" { assert ($1 == 598); $$ = $1; } | n599 "599" { assert ($1 == 599); $$ = $1; } | n600 "600" { assert ($1 == 600); $$ = $1; } | n601 "601" { assert ($1 == 601); $$ = $1; } | n602 "602" { assert ($1 == 602); $$ = $1; } | n603 "603" { assert ($1 == 603); $$ = $1; } | n604 "604" { assert ($1 == 604); $$ = $1; } | n605 "605" { assert ($1 == 605); $$ = $1; } | n606 "606" { assert ($1 == 606); $$ = $1; } | n607 "607" { assert ($1 == 607); $$ = $1; } | n608 "608" { assert ($1 == 608); $$ = $1; } | n609 "609" { assert ($1 == 609); $$ = $1; } | n610 "610" { assert ($1 == 610); $$ = $1; } | n611 "611" { assert ($1 == 611); $$ = $1; } | n612 "612" { assert ($1 == 612); $$ = $1; } | n613 "613" { assert ($1 == 613); $$ = $1; } | n614 "614" { assert ($1 == 614); $$ = $1; } | n615 "615" { assert ($1 == 615); $$ = $1; } | n616 "616" { assert ($1 == 616); $$ = $1; } | n617 "617" { assert ($1 == 617); $$ = $1; } | 
n618 "618" { assert ($1 == 618); $$ = $1; } | n619 "619" { assert ($1 == 619); $$ = $1; } | n620 "620" { assert ($1 == 620); $$ = $1; } | n621 "621" { assert ($1 == 621); $$ = $1; } | n622 "622" { assert ($1 == 622); $$ = $1; } | n623 "623" { assert ($1 == 623); $$ = $1; } | n624 "624" { assert ($1 == 624); $$ = $1; } | n625 "625" { assert ($1 == 625); $$ = $1; } | n626 "626" { assert ($1 == 626); $$ = $1; } | n627 "627" { assert ($1 == 627); $$ = $1; } | n628 "628" { assert ($1 == 628); $$ = $1; } | n629 "629" { assert ($1 == 629); $$ = $1; } | n630 "630" { assert ($1 == 630); $$ = $1; } | n631 "631" { assert ($1 == 631); $$ = $1; } | n632 "632" { assert ($1 == 632); $$ = $1; } | n633 "633" { assert ($1 == 633); $$ = $1; } | n634 "634" { assert ($1 == 634); $$ = $1; } | n635 "635" { assert ($1 == 635); $$ = $1; } | n636 "636" { assert ($1 == 636); $$ = $1; } | n637 "637" { assert ($1 == 637); $$ = $1; } | n638 "638" { assert ($1 == 638); $$ = $1; } | n639 "639" { assert ($1 == 639); $$ = $1; } | n640 "640" { assert ($1 == 640); $$ = $1; } | n641 "641" { assert ($1 == 641); $$ = $1; } | n642 "642" { assert ($1 == 642); $$ = $1; } | n643 "643" { assert ($1 == 643); $$ = $1; } | n644 "644" { assert ($1 == 644); $$ = $1; } | n645 "645" { assert ($1 == 645); $$ = $1; } | n646 "646" { assert ($1 == 646); $$ = $1; } | n647 "647" { assert ($1 == 647); $$ = $1; } | n648 "648" { assert ($1 == 648); $$ = $1; } | n649 "649" { assert ($1 == 649); $$ = $1; } | n650 "650" { assert ($1 == 650); $$ = $1; } | n651 "651" { assert ($1 == 651); $$ = $1; } | n652 "652" { assert ($1 == 652); $$ = $1; } | n653 "653" { assert ($1 == 653); $$ = $1; } | n654 "654" { assert ($1 == 654); $$ = $1; } | n655 "655" { assert ($1 == 655); $$ = $1; } | n656 "656" { assert ($1 == 656); $$ = $1; } | n657 "657" { assert ($1 == 657); $$ = $1; } | n658 "658" { assert ($1 == 658); $$ = $1; } | n659 "659" { assert ($1 == 659); $$ = $1; } | n660 "660" { assert ($1 == 660); $$ = $1; } | n661 "661" { assert 
($1 == 661); $$ = $1; } | n662 "662" { assert ($1 == 662); $$ = $1; } | n663 "663" { assert ($1 == 663); $$ = $1; } | n664 "664" { assert ($1 == 664); $$ = $1; } | n665 "665" { assert ($1 == 665); $$ = $1; } | n666 "666" { assert ($1 == 666); $$ = $1; } | n667 "667" { assert ($1 == 667); $$ = $1; } | n668 "668" { assert ($1 == 668); $$ = $1; } | n669 "669" { assert ($1 == 669); $$ = $1; } | n670 "670" { assert ($1 == 670); $$ = $1; } | n671 "671" { assert ($1 == 671); $$ = $1; } | n672 "672" { assert ($1 == 672); $$ = $1; } | n673 "673" { assert ($1 == 673); $$ = $1; } | n674 "674" { assert ($1 == 674); $$ = $1; } | n675 "675" { assert ($1 == 675); $$ = $1; } | n676 "676" { assert ($1 == 676); $$ = $1; } | n677 "677" { assert ($1 == 677); $$ = $1; } | n678 "678" { assert ($1 == 678); $$ = $1; } | n679 "679" { assert ($1 == 679); $$ = $1; } | n680 "680" { assert ($1 == 680); $$ = $1; } | n681 "681" { assert ($1 == 681); $$ = $1; } | n682 "682" { assert ($1 == 682); $$ = $1; } | n683 "683" { assert ($1 == 683); $$ = $1; } | n684 "684" { assert ($1 == 684); $$ = $1; } | n685 "685" { assert ($1 == 685); $$ = $1; } | n686 "686" { assert ($1 == 686); $$ = $1; } | n687 "687" { assert ($1 == 687); $$ = $1; } | n688 "688" { assert ($1 == 688); $$ = $1; } | n689 "689" { assert ($1 == 689); $$ = $1; } | n690 "690" { assert ($1 == 690); $$ = $1; } | n691 "691" { assert ($1 == 691); $$ = $1; } | n692 "692" { assert ($1 == 692); $$ = $1; } | n693 "693" { assert ($1 == 693); $$ = $1; } | n694 "694" { assert ($1 == 694); $$ = $1; } | n695 "695" { assert ($1 == 695); $$ = $1; } | n696 "696" { assert ($1 == 696); $$ = $1; } | n697 "697" { assert ($1 == 697); $$ = $1; } | n698 "698" { assert ($1 == 698); $$ = $1; } | n699 "699" { assert ($1 == 699); $$ = $1; } | n700 "700" { assert ($1 == 700); $$ = $1; } | n701 "701" { assert ($1 == 701); $$ = $1; } | n702 "702" { assert ($1 == 702); $$ = $1; } | n703 "703" { assert ($1 == 703); $$ = $1; } | n704 "704" { assert ($1 == 704); $$ = $1; 
} | n705 "705" { assert ($1 == 705); $$ = $1; } | n706 "706" { assert ($1 == 706); $$ = $1; } | n707 "707" { assert ($1 == 707); $$ = $1; } | n708 "708" { assert ($1 == 708); $$ = $1; } | n709 "709" { assert ($1 == 709); $$ = $1; } | n710 "710" { assert ($1 == 710); $$ = $1; } | n711 "711" { assert ($1 == 711); $$ = $1; } | n712 "712" { assert ($1 == 712); $$ = $1; } | n713 "713" { assert ($1 == 713); $$ = $1; } | n714 "714" { assert ($1 == 714); $$ = $1; } | n715 "715" { assert ($1 == 715); $$ = $1; } | n716 "716" { assert ($1 == 716); $$ = $1; } | n717 "717" { assert ($1 == 717); $$ = $1; } | n718 "718" { assert ($1 == 718); $$ = $1; } | n719 "719" { assert ($1 == 719); $$ = $1; } | n720 "720" { assert ($1 == 720); $$ = $1; } | n721 "721" { assert ($1 == 721); $$ = $1; } | n722 "722" { assert ($1 == 722); $$ = $1; } | n723 "723" { assert ($1 == 723); $$ = $1; } | n724 "724" { assert ($1 == 724); $$ = $1; } | n725 "725" { assert ($1 == 725); $$ = $1; } | n726 "726" { assert ($1 == 726); $$ = $1; } | n727 "727" { assert ($1 == 727); $$ = $1; } | n728 "728" { assert ($1 == 728); $$ = $1; } | n729 "729" { assert ($1 == 729); $$ = $1; } | n730 "730" { assert ($1 == 730); $$ = $1; } | n731 "731" { assert ($1 == 731); $$ = $1; } | n732 "732" { assert ($1 == 732); $$ = $1; } | n733 "733" { assert ($1 == 733); $$ = $1; } | n734 "734" { assert ($1 == 734); $$ = $1; } | n735 "735" { assert ($1 == 735); $$ = $1; } | n736 "736" { assert ($1 == 736); $$ = $1; } | n737 "737" { assert ($1 == 737); $$ = $1; } | n738 "738" { assert ($1 == 738); $$ = $1; } | n739 "739" { assert ($1 == 739); $$ = $1; } | n740 "740" { assert ($1 == 740); $$ = $1; } | n741 "741" { assert ($1 == 741); $$ = $1; } | n742 "742" { assert ($1 == 742); $$ = $1; } | n743 "743" { assert ($1 == 743); $$ = $1; } | n744 "744" { assert ($1 == 744); $$ = $1; } | n745 "745" { assert ($1 == 745); $$ = $1; } | n746 "746" { assert ($1 == 746); $$ = $1; } | n747 "747" { assert ($1 == 747); $$ = $1; } | n748 "748" { 
assert ($1 == 748); $$ = $1; } | n749 "749" { assert ($1 == 749); $$ = $1; } | n750 "750" { assert ($1 == 750); $$ = $1; } | n751 "751" { assert ($1 == 751); $$ = $1; } | n752 "752" { assert ($1 == 752); $$ = $1; } | n753 "753" { assert ($1 == 753); $$ = $1; } | n754 "754" { assert ($1 == 754); $$ = $1; } | n755 "755" { assert ($1 == 755); $$ = $1; } | n756 "756" { assert ($1 == 756); $$ = $1; } | n757 "757" { assert ($1 == 757); $$ = $1; } | n758 "758" { assert ($1 == 758); $$ = $1; } | n759 "759" { assert ($1 == 759); $$ = $1; } | n760 "760" { assert ($1 == 760); $$ = $1; } | n761 "761" { assert ($1 == 761); $$ = $1; } | n762 "762" { assert ($1 == 762); $$ = $1; } | n763 "763" { assert ($1 == 763); $$ = $1; } | n764 "764" { assert ($1 == 764); $$ = $1; } | n765 "765" { assert ($1 == 765); $$ = $1; } | n766 "766" { assert ($1 == 766); $$ = $1; } | n767 "767" { assert ($1 == 767); $$ = $1; } | n768 "768" { assert ($1 == 768); $$ = $1; } | n769 "769" { assert ($1 == 769); $$ = $1; } | n770 "770" { assert ($1 == 770); $$ = $1; } | n771 "771" { assert ($1 == 771); $$ = $1; } | n772 "772" { assert ($1 == 772); $$ = $1; } | n773 "773" { assert ($1 == 773); $$ = $1; } | n774 "774" { assert ($1 == 774); $$ = $1; } | n775 "775" { assert ($1 == 775); $$ = $1; } | n776 "776" { assert ($1 == 776); $$ = $1; } | n777 "777" { assert ($1 == 777); $$ = $1; } | n778 "778" { assert ($1 == 778); $$ = $1; } | n779 "779" { assert ($1 == 779); $$ = $1; } | n780 "780" { assert ($1 == 780); $$ = $1; } | n781 "781" { assert ($1 == 781); $$ = $1; } | n782 "782" { assert ($1 == 782); $$ = $1; } | n783 "783" { assert ($1 == 783); $$ = $1; } | n784 "784" { assert ($1 == 784); $$ = $1; } | n785 "785" { assert ($1 == 785); $$ = $1; } | n786 "786" { assert ($1 == 786); $$ = $1; } | n787 "787" { assert ($1 == 787); $$ = $1; } | n788 "788" { assert ($1 == 788); $$ = $1; } | n789 "789" { assert ($1 == 789); $$ = $1; } | n790 "790" { assert ($1 == 790); $$ = $1; } | n791 "791" { assert ($1 == 791); 
$$ = $1; } | n792 "792" { assert ($1 == 792); $$ = $1; } | n793 "793" { assert ($1 == 793); $$ = $1; } | n794 "794" { assert ($1 == 794); $$ = $1; } | n795 "795" { assert ($1 == 795); $$ = $1; } | n796 "796" { assert ($1 == 796); $$ = $1; } | n797 "797" { assert ($1 == 797); $$ = $1; } | n798 "798" { assert ($1 == 798); $$ = $1; } | n799 "799" { assert ($1 == 799); $$ = $1; } | n800 "800" { assert ($1 == 800); $$ = $1; } | n801 "801" { assert ($1 == 801); $$ = $1; } | n802 "802" { assert ($1 == 802); $$ = $1; } | n803 "803" { assert ($1 == 803); $$ = $1; } | n804 "804" { assert ($1 == 804); $$ = $1; } | n805 "805" { assert ($1 == 805); $$ = $1; } | n806 "806" { assert ($1 == 806); $$ = $1; } | n807 "807" { assert ($1 == 807); $$ = $1; } | n808 "808" { assert ($1 == 808); $$ = $1; } | n809 "809" { assert ($1 == 809); $$ = $1; } | n810 "810" { assert ($1 == 810); $$ = $1; } | n811 "811" { assert ($1 == 811); $$ = $1; } | n812 "812" { assert ($1 == 812); $$ = $1; } | n813 "813" { assert ($1 == 813); $$ = $1; } | n814 "814" { assert ($1 == 814); $$ = $1; } | n815 "815" { assert ($1 == 815); $$ = $1; } | n816 "816" { assert ($1 == 816); $$ = $1; } | n817 "817" { assert ($1 == 817); $$ = $1; } | n818 "818" { assert ($1 == 818); $$ = $1; } | n819 "819" { assert ($1 == 819); $$ = $1; } | n820 "820" { assert ($1 == 820); $$ = $1; } | n821 "821" { assert ($1 == 821); $$ = $1; } | n822 "822" { assert ($1 == 822); $$ = $1; } | n823 "823" { assert ($1 == 823); $$ = $1; } | n824 "824" { assert ($1 == 824); $$ = $1; } | n825 "825" { assert ($1 == 825); $$ = $1; } | n826 "826" { assert ($1 == 826); $$ = $1; } | n827 "827" { assert ($1 == 827); $$ = $1; } | n828 "828" { assert ($1 == 828); $$ = $1; } | n829 "829" { assert ($1 == 829); $$ = $1; } | n830 "830" { assert ($1 == 830); $$ = $1; } | n831 "831" { assert ($1 == 831); $$ = $1; } | n832 "832" { assert ($1 == 832); $$ = $1; } | n833 "833" { assert ($1 == 833); $$ = $1; } | n834 "834" { assert ($1 == 834); $$ = $1; } | n835 
"835" { assert ($1 == 835); $$ = $1; } | n836 "836" { assert ($1 == 836); $$ = $1; } | n837 "837" { assert ($1 == 837); $$ = $1; } | n838 "838" { assert ($1 == 838); $$ = $1; } | n839 "839" { assert ($1 == 839); $$ = $1; } | n840 "840" { assert ($1 == 840); $$ = $1; } | n841 "841" { assert ($1 == 841); $$ = $1; } | n842 "842" { assert ($1 == 842); $$ = $1; } | n843 "843" { assert ($1 == 843); $$ = $1; } | n844 "844" { assert ($1 == 844); $$ = $1; } | n845 "845" { assert ($1 == 845); $$ = $1; } | n846 "846" { assert ($1 == 846); $$ = $1; } | n847 "847" { assert ($1 == 847); $$ = $1; } | n848 "848" { assert ($1 == 848); $$ = $1; } | n849 "849" { assert ($1 == 849); $$ = $1; } | n850 "850" { assert ($1 == 850); $$ = $1; } | n851 "851" { assert ($1 == 851); $$ = $1; } | n852 "852" { assert ($1 == 852); $$ = $1; } | n853 "853" { assert ($1 == 853); $$ = $1; } | n854 "854" { assert ($1 == 854); $$ = $1; } | n855 "855" { assert ($1 == 855); $$ = $1; } | n856 "856" { assert ($1 == 856); $$ = $1; } | n857 "857" { assert ($1 == 857); $$ = $1; } | n858 "858" { assert ($1 == 858); $$ = $1; } | n859 "859" { assert ($1 == 859); $$ = $1; } | n860 "860" { assert ($1 == 860); $$ = $1; } | n861 "861" { assert ($1 == 861); $$ = $1; } | n862 "862" { assert ($1 == 862); $$ = $1; } | n863 "863" { assert ($1 == 863); $$ = $1; } | n864 "864" { assert ($1 == 864); $$ = $1; } | n865 "865" { assert ($1 == 865); $$ = $1; } | n866 "866" { assert ($1 == 866); $$ = $1; } | n867 "867" { assert ($1 == 867); $$ = $1; } | n868 "868" { assert ($1 == 868); $$ = $1; } | n869 "869" { assert ($1 == 869); $$ = $1; } | n870 "870" { assert ($1 == 870); $$ = $1; } | n871 "871" { assert ($1 == 871); $$ = $1; } | n872 "872" { assert ($1 == 872); $$ = $1; } | n873 "873" { assert ($1 == 873); $$ = $1; } | n874 "874" { assert ($1 == 874); $$ = $1; } | n875 "875" { assert ($1 == 875); $$ = $1; } | n876 "876" { assert ($1 == 876); $$ = $1; } | n877 "877" { assert ($1 == 877); $$ = $1; } | n878 "878" { assert ($1 == 
878); $$ = $1; } | n879 "879" { assert ($1 == 879); $$ = $1; } | n880 "880" { assert ($1 == 880); $$ = $1; } | n881 "881" { assert ($1 == 881); $$ = $1; } | n882 "882" { assert ($1 == 882); $$ = $1; } | n883 "883" { assert ($1 == 883); $$ = $1; } | n884 "884" { assert ($1 == 884); $$ = $1; } | n885 "885" { assert ($1 == 885); $$ = $1; } | n886 "886" { assert ($1 == 886); $$ = $1; } | n887 "887" { assert ($1 == 887); $$ = $1; } | n888 "888" { assert ($1 == 888); $$ = $1; } | n889 "889" { assert ($1 == 889); $$ = $1; } | n890 "890" { assert ($1 == 890); $$ = $1; } | n891 "891" { assert ($1 == 891); $$ = $1; } | n892 "892" { assert ($1 == 892); $$ = $1; } | n893 "893" { assert ($1 == 893); $$ = $1; } | n894 "894" { assert ($1 == 894); $$ = $1; } | n895 "895" { assert ($1 == 895); $$ = $1; } | n896 "896" { assert ($1 == 896); $$ = $1; } | n897 "897" { assert ($1 == 897); $$ = $1; } | n898 "898" { assert ($1 == 898); $$ = $1; } | n899 "899" { assert ($1 == 899); $$ = $1; } | n900 "900" { assert ($1 == 900); $$ = $1; } | n901 "901" { assert ($1 == 901); $$ = $1; } | n902 "902" { assert ($1 == 902); $$ = $1; } | n903 "903" { assert ($1 == 903); $$ = $1; } | n904 "904" { assert ($1 == 904); $$ = $1; } | n905 "905" { assert ($1 == 905); $$ = $1; } | n906 "906" { assert ($1 == 906); $$ = $1; } | n907 "907" { assert ($1 == 907); $$ = $1; } | n908 "908" { assert ($1 == 908); $$ = $1; } | n909 "909" { assert ($1 == 909); $$ = $1; } | n910 "910" { assert ($1 == 910); $$ = $1; } | n911 "911" { assert ($1 == 911); $$ = $1; } | n912 "912" { assert ($1 == 912); $$ = $1; } | n913 "913" { assert ($1 == 913); $$ = $1; } | n914 "914" { assert ($1 == 914); $$ = $1; } | n915 "915" { assert ($1 == 915); $$ = $1; } | n916 "916" { assert ($1 == 916); $$ = $1; } | n917 "917" { assert ($1 == 917); $$ = $1; } | n918 "918" { assert ($1 == 918); $$ = $1; } | n919 "919" { assert ($1 == 919); $$ = $1; } | n920 "920" { assert ($1 == 920); $$ = $1; } | n921 "921" { assert ($1 == 921); $$ = $1; } | 
n922 "922" { assert ($1 == 922); $$ = $1; } | n923 "923" { assert ($1 == 923); $$ = $1; } | n924 "924" { assert ($1 == 924); $$ = $1; } | n925 "925" { assert ($1 == 925); $$ = $1; } | n926 "926" { assert ($1 == 926); $$ = $1; } | n927 "927" { assert ($1 == 927); $$ = $1; } | n928 "928" { assert ($1 == 928); $$ = $1; } | n929 "929" { assert ($1 == 929); $$ = $1; } | n930 "930" { assert ($1 == 930); $$ = $1; } | n931 "931" { assert ($1 == 931); $$ = $1; } | n932 "932" { assert ($1 == 932); $$ = $1; } | n933 "933" { assert ($1 == 933); $$ = $1; } | n934 "934" { assert ($1 == 934); $$ = $1; } | n935 "935" { assert ($1 == 935); $$ = $1; } | n936 "936" { assert ($1 == 936); $$ = $1; } | n937 "937" { assert ($1 == 937); $$ = $1; } | n938 "938" { assert ($1 == 938); $$ = $1; } | n939 "939" { assert ($1 == 939); $$ = $1; } | n940 "940" { assert ($1 == 940); $$ = $1; } | n941 "941" { assert ($1 == 941); $$ = $1; } | n942 "942" { assert ($1 == 942); $$ = $1; } | n943 "943" { assert ($1 == 943); $$ = $1; } | n944 "944" { assert ($1 == 944); $$ = $1; } | n945 "945" { assert ($1 == 945); $$ = $1; } | n946 "946" { assert ($1 == 946); $$ = $1; } | n947 "947" { assert ($1 == 947); $$ = $1; } | n948 "948" { assert ($1 == 948); $$ = $1; } | n949 "949" { assert ($1 == 949); $$ = $1; } | n950 "950" { assert ($1 == 950); $$ = $1; } | n951 "951" { assert ($1 == 951); $$ = $1; } | n952 "952" { assert ($1 == 952); $$ = $1; } | n953 "953" { assert ($1 == 953); $$ = $1; } | n954 "954" { assert ($1 == 954); $$ = $1; } | n955 "955" { assert ($1 == 955); $$ = $1; } | n956 "956" { assert ($1 == 956); $$ = $1; } | n957 "957" { assert ($1 == 957); $$ = $1; } | n958 "958" { assert ($1 == 958); $$ = $1; } | n959 "959" { assert ($1 == 959); $$ = $1; } | n960 "960" { assert ($1 == 960); $$ = $1; } | n961 "961" { assert ($1 == 961); $$ = $1; } | n962 "962" { assert ($1 == 962); $$ = $1; } | n963 "963" { assert ($1 == 963); $$ = $1; } | n964 "964" { assert ($1 == 964); $$ = $1; } | n965 "965" { assert 
($1 == 965); $$ = $1; } | n966 "966" { assert ($1 == 966); $$ = $1; } | n967 "967" { assert ($1 == 967); $$ = $1; } | n968 "968" { assert ($1 == 968); $$ = $1; } | n969 "969" { assert ($1 == 969); $$ = $1; } | n970 "970" { assert ($1 == 970); $$ = $1; } | n971 "971" { assert ($1 == 971); $$ = $1; } | n972 "972" { assert ($1 == 972); $$ = $1; } | n973 "973" { assert ($1 == 973); $$ = $1; } | n974 "974" { assert ($1 == 974); $$ = $1; } | n975 "975" { assert ($1 == 975); $$ = $1; } | n976 "976" { assert ($1 == 976); $$ = $1; } | n977 "977" { assert ($1 == 977); $$ = $1; } | n978 "978" { assert ($1 == 978); $$ = $1; } | n979 "979" { assert ($1 == 979); $$ = $1; } | n980 "980" { assert ($1 == 980); $$ = $1; } | n981 "981" { assert ($1 == 981); $$ = $1; } | n982 "982" { assert ($1 == 982); $$ = $1; } | n983 "983" { assert ($1 == 983); $$ = $1; } | n984 "984" { assert ($1 == 984); $$ = $1; } | n985 "985" { assert ($1 == 985); $$ = $1; } | n986 "986" { assert ($1 == 986); $$ = $1; } | n987 "987" { assert ($1 == 987); $$ = $1; } | n988 "988" { assert ($1 == 988); $$ = $1; } | n989 "989" { assert ($1 == 989); $$ = $1; } | n990 "990" { assert ($1 == 990); $$ = $1; } | n991 "991" { assert ($1 == 991); $$ = $1; } | n992 "992" { assert ($1 == 992); $$ = $1; } | n993 "993" { assert ($1 == 993); $$ = $1; } | n994 "994" { assert ($1 == 994); $$ = $1; } | n995 "995" { assert ($1 == 995); $$ = $1; } | n996 "996" { assert ($1 == 996); $$ = $1; } | n997 "997" { assert ($1 == 997); $$ = $1; } | n998 "998" { assert ($1 == 998); $$ = $1; } | n999 "999" { assert ($1 == 999); $$ = $1; } | n1000 "1000" { assert ($1 == 1000); $$ = $1; } ; n1: token { $$ = 1; }; n2: token { $$ = 2; }; n3: token { $$ = 3; }; n4: token { $$ = 4; }; n5: token { $$ = 5; }; n6: token { $$ = 6; }; n7: token { $$ = 7; }; n8: token { $$ = 8; }; n9: token { $$ = 9; }; n10: token { $$ = 10; }; n11: token { $$ = 11; }; n12: token { $$ = 12; }; n13: token { $$ = 13; }; n14: token { $$ = 14; }; n15: token { $$ = 15; }; 
n16: token { $$ = 16; }; n17: token { $$ = 17; }; n18: token { $$ = 18; }; n19: token { $$ = 19; }; n20: token { $$ = 20; }; n21: token { $$ = 21; }; n22: token { $$ = 22; }; n23: token { $$ = 23; }; n24: token { $$ = 24; }; n25: token { $$ = 25; }; n26: token { $$ = 26; }; n27: token { $$ = 27; }; n28: token { $$ = 28; }; n29: token { $$ = 29; }; n30: token { $$ = 30; }; n31: token { $$ = 31; }; n32: token { $$ = 32; }; n33: token { $$ = 33; }; n34: token { $$ = 34; }; n35: token { $$ = 35; }; n36: token { $$ = 36; }; n37: token { $$ = 37; }; n38: token { $$ = 38; }; n39: token { $$ = 39; }; n40: token { $$ = 40; }; n41: token { $$ = 41; }; n42: token { $$ = 42; }; n43: token { $$ = 43; }; n44: token { $$ = 44; }; n45: token { $$ = 45; }; n46: token { $$ = 46; }; n47: token { $$ = 47; }; n48: token { $$ = 48; }; n49: token { $$ = 49; }; n50: token { $$ = 50; }; n51: token { $$ = 51; }; n52: token { $$ = 52; }; n53: token { $$ = 53; }; n54: token { $$ = 54; }; n55: token { $$ = 55; }; n56: token { $$ = 56; }; n57: token { $$ = 57; }; n58: token { $$ = 58; }; n59: token { $$ = 59; }; n60: token { $$ = 60; }; n61: token { $$ = 61; }; n62: token { $$ = 62; }; n63: token { $$ = 63; }; n64: token { $$ = 64; }; n65: token { $$ = 65; }; n66: token { $$ = 66; }; n67: token { $$ = 67; }; n68: token { $$ = 68; }; n69: token { $$ = 69; }; n70: token { $$ = 70; }; n71: token { $$ = 71; }; n72: token { $$ = 72; }; n73: token { $$ = 73; }; n74: token { $$ = 74; }; n75: token { $$ = 75; }; n76: token { $$ = 76; }; n77: token { $$ = 77; }; n78: token { $$ = 78; }; n79: token { $$ = 79; }; n80: token { $$ = 80; }; n81: token { $$ = 81; }; n82: token { $$ = 82; }; n83: token { $$ = 83; }; n84: token { $$ = 84; }; n85: token { $$ = 85; }; n86: token { $$ = 86; }; n87: token { $$ = 87; }; n88: token { $$ = 88; }; n89: token { $$ = 89; }; n90: token { $$ = 90; }; n91: token { $$ = 91; }; n92: token { $$ = 92; }; n93: token { $$ = 93; }; n94: token { $$ = 94; }; n95: token { $$ = 95; }; 
n96: token { $$ = 96; }; n97: token { $$ = 97; }; n98: token { $$ = 98; }; n99: token { $$ = 99; }; n100: token { $$ = 100; }; n101: token { $$ = 101; }; n102: token { $$ = 102; }; n103: token { $$ = 103; }; n104: token { $$ = 104; }; n105: token { $$ = 105; }; n106: token { $$ = 106; }; n107: token { $$ = 107; }; n108: token { $$ = 108; }; n109: token { $$ = 109; }; n110: token { $$ = 110; }; n111: token { $$ = 111; }; n112: token { $$ = 112; }; n113: token { $$ = 113; }; n114: token { $$ = 114; }; n115: token { $$ = 115; }; n116: token { $$ = 116; }; n117: token { $$ = 117; }; n118: token { $$ = 118; }; n119: token { $$ = 119; }; n120: token { $$ = 120; }; n121: token { $$ = 121; }; n122: token { $$ = 122; }; n123: token { $$ = 123; }; n124: token { $$ = 124; }; n125: token { $$ = 125; }; n126: token { $$ = 126; }; n127: token { $$ = 127; }; n128: token { $$ = 128; }; n129: token { $$ = 129; }; n130: token { $$ = 130; }; n131: token { $$ = 131; }; n132: token { $$ = 132; }; n133: token { $$ = 133; }; n134: token { $$ = 134; }; n135: token { $$ = 135; }; n136: token { $$ = 136; }; n137: token { $$ = 137; }; n138: token { $$ = 138; }; n139: token { $$ = 139; }; n140: token { $$ = 140; }; n141: token { $$ = 141; }; n142: token { $$ = 142; }; n143: token { $$ = 143; }; n144: token { $$ = 144; }; n145: token { $$ = 145; }; n146: token { $$ = 146; }; n147: token { $$ = 147; }; n148: token { $$ = 148; }; n149: token { $$ = 149; }; n150: token { $$ = 150; }; n151: token { $$ = 151; }; n152: token { $$ = 152; }; n153: token { $$ = 153; }; n154: token { $$ = 154; }; n155: token { $$ = 155; }; n156: token { $$ = 156; }; n157: token { $$ = 157; }; n158: token { $$ = 158; }; n159: token { $$ = 159; }; n160: token { $$ = 160; }; n161: token { $$ = 161; }; n162: token { $$ = 162; }; n163: token { $$ = 163; }; n164: token { $$ = 164; }; n165: token { $$ = 165; }; n166: token { $$ = 166; }; n167: token { $$ = 167; }; n168: token { $$ = 168; }; n169: token { $$ = 169; }; n170: 
token { $$ = 170; }; n171: token { $$ = 171; }; n172: token { $$ = 172; }; n173: token { $$ = 173; }; n174: token { $$ = 174; }; n175: token { $$ = 175; }; n176: token { $$ = 176; }; n177: token { $$ = 177; }; n178: token { $$ = 178; }; n179: token { $$ = 179; }; n180: token { $$ = 180; }; n181: token { $$ = 181; }; n182: token { $$ = 182; }; n183: token { $$ = 183; }; n184: token { $$ = 184; }; n185: token { $$ = 185; }; n186: token { $$ = 186; }; n187: token { $$ = 187; }; n188: token { $$ = 188; }; n189: token { $$ = 189; }; n190: token { $$ = 190; }; n191: token { $$ = 191; }; n192: token { $$ = 192; }; n193: token { $$ = 193; }; n194: token { $$ = 194; }; n195: token { $$ = 195; }; n196: token { $$ = 196; }; n197: token { $$ = 197; }; n198: token { $$ = 198; }; n199: token { $$ = 199; }; n200: token { $$ = 200; }; n201: token { $$ = 201; }; n202: token { $$ = 202; }; n203: token { $$ = 203; }; n204: token { $$ = 204; }; n205: token { $$ = 205; }; n206: token { $$ = 206; }; n207: token { $$ = 207; }; n208: token { $$ = 208; }; n209: token { $$ = 209; }; n210: token { $$ = 210; }; n211: token { $$ = 211; }; n212: token { $$ = 212; }; n213: token { $$ = 213; }; n214: token { $$ = 214; }; n215: token { $$ = 215; }; n216: token { $$ = 216; }; n217: token { $$ = 217; }; n218: token { $$ = 218; }; n219: token { $$ = 219; }; n220: token { $$ = 220; }; n221: token { $$ = 221; }; n222: token { $$ = 222; }; n223: token { $$ = 223; }; n224: token { $$ = 224; }; n225: token { $$ = 225; }; n226: token { $$ = 226; }; n227: token { $$ = 227; }; n228: token { $$ = 228; }; n229: token { $$ = 229; }; n230: token { $$ = 230; }; n231: token { $$ = 231; }; n232: token { $$ = 232; }; n233: token { $$ = 233; }; n234: token { $$ = 234; }; n235: token { $$ = 235; }; n236: token { $$ = 236; }; n237: token { $$ = 237; }; n238: token { $$ = 238; }; n239: token { $$ = 239; }; n240: token { $$ = 240; }; n241: token { $$ = 241; }; n242: token { $$ = 242; }; n243: token { $$ = 243; }; n244: 
token { $$ = 244; }; n245: token { $$ = 245; }; n246: token { $$ = 246; }; n247: token { $$ = 247; }; n248: token { $$ = 248; }; n249: token { $$ = 249; }; n250: token { $$ = 250; }; n251: token { $$ = 251; }; n252: token { $$ = 252; }; n253: token { $$ = 253; }; n254: token { $$ = 254; }; n255: token { $$ = 255; }; n256: token { $$ = 256; }; n257: token { $$ = 257; }; n258: token { $$ = 258; }; n259: token { $$ = 259; }; n260: token { $$ = 260; }; n261: token { $$ = 261; }; n262: token { $$ = 262; }; n263: token { $$ = 263; }; n264: token { $$ = 264; }; n265: token { $$ = 265; }; n266: token { $$ = 266; }; n267: token { $$ = 267; }; n268: token { $$ = 268; }; n269: token { $$ = 269; }; n270: token { $$ = 270; }; n271: token { $$ = 271; }; n272: token { $$ = 272; }; n273: token { $$ = 273; }; n274: token { $$ = 274; }; n275: token { $$ = 275; }; n276: token { $$ = 276; }; n277: token { $$ = 277; }; n278: token { $$ = 278; }; n279: token { $$ = 279; }; n280: token { $$ = 280; }; n281: token { $$ = 281; }; n282: token { $$ = 282; }; n283: token { $$ = 283; }; n284: token { $$ = 284; }; n285: token { $$ = 285; }; n286: token { $$ = 286; }; n287: token { $$ = 287; }; n288: token { $$ = 288; }; n289: token { $$ = 289; }; n290: token { $$ = 290; }; n291: token { $$ = 291; }; n292: token { $$ = 292; }; n293: token { $$ = 293; }; n294: token { $$ = 294; }; n295: token { $$ = 295; }; n296: token { $$ = 296; }; n297: token { $$ = 297; }; n298: token { $$ = 298; }; n299: token { $$ = 299; }; n300: token { $$ = 300; }; n301: token { $$ = 301; }; n302: token { $$ = 302; }; n303: token { $$ = 303; }; n304: token { $$ = 304; }; n305: token { $$ = 305; }; n306: token { $$ = 306; }; n307: token { $$ = 307; }; n308: token { $$ = 308; }; n309: token { $$ = 309; }; n310: token { $$ = 310; }; n311: token { $$ = 311; }; n312: token { $$ = 312; }; n313: token { $$ = 313; }; n314: token { $$ = 314; }; n315: token { $$ = 315; }; n316: token { $$ = 316; }; n317: token { $$ = 317; }; n318: 
token { $$ = 318; }; n319: token { $$ = 319; }; n320: token { $$ = 320; }; n321: token { $$ = 321; }; n322: token { $$ = 322; }; n323: token { $$ = 323; }; n324: token { $$ = 324; }; n325: token { $$ = 325; }; n326: token { $$ = 326; }; n327: token { $$ = 327; }; n328: token { $$ = 328; }; n329: token { $$ = 329; }; n330: token { $$ = 330; }; n331: token { $$ = 331; }; n332: token { $$ = 332; }; n333: token { $$ = 333; }; n334: token { $$ = 334; }; n335: token { $$ = 335; }; n336: token { $$ = 336; }; n337: token { $$ = 337; }; n338: token { $$ = 338; }; n339: token { $$ = 339; }; n340: token { $$ = 340; }; n341: token { $$ = 341; }; n342: token { $$ = 342; }; n343: token { $$ = 343; }; n344: token { $$ = 344; }; n345: token { $$ = 345; }; n346: token { $$ = 346; }; n347: token { $$ = 347; }; n348: token { $$ = 348; }; n349: token { $$ = 349; }; n350: token { $$ = 350; }; n351: token { $$ = 351; }; n352: token { $$ = 352; }; n353: token { $$ = 353; }; n354: token { $$ = 354; }; n355: token { $$ = 355; }; n356: token { $$ = 356; }; n357: token { $$ = 357; }; n358: token { $$ = 358; }; n359: token { $$ = 359; }; n360: token { $$ = 360; }; n361: token { $$ = 361; }; n362: token { $$ = 362; }; n363: token { $$ = 363; }; n364: token { $$ = 364; }; n365: token { $$ = 365; }; n366: token { $$ = 366; }; n367: token { $$ = 367; }; n368: token { $$ = 368; }; n369: token { $$ = 369; }; n370: token { $$ = 370; }; n371: token { $$ = 371; }; n372: token { $$ = 372; }; n373: token { $$ = 373; }; n374: token { $$ = 374; }; n375: token { $$ = 375; }; n376: token { $$ = 376; }; n377: token { $$ = 377; }; n378: token { $$ = 378; }; n379: token { $$ = 379; }; n380: token { $$ = 380; }; n381: token { $$ = 381; }; n382: token { $$ = 382; }; n383: token { $$ = 383; }; n384: token { $$ = 384; }; n385: token { $$ = 385; }; n386: token { $$ = 386; }; n387: token { $$ = 387; }; n388: token { $$ = 388; }; n389: token { $$ = 389; }; n390: token { $$ = 390; }; n391: token { $$ = 391; }; n392: 
token { $$ = 392; }; n393: token { $$ = 393; }; n394: token { $$ = 394; }; n395: token { $$ = 395; }; n396: token { $$ = 396; }; n397: token { $$ = 397; }; n398: token { $$ = 398; }; n399: token { $$ = 399; }; n400: token { $$ = 400; }; n401: token { $$ = 401; }; n402: token { $$ = 402; }; n403: token { $$ = 403; }; n404: token { $$ = 404; }; n405: token { $$ = 405; }; n406: token { $$ = 406; }; n407: token { $$ = 407; }; n408: token { $$ = 408; }; n409: token { $$ = 409; }; n410: token { $$ = 410; }; n411: token { $$ = 411; }; n412: token { $$ = 412; }; n413: token { $$ = 413; }; n414: token { $$ = 414; }; n415: token { $$ = 415; }; n416: token { $$ = 416; }; n417: token { $$ = 417; }; n418: token { $$ = 418; }; n419: token { $$ = 419; }; n420: token { $$ = 420; }; n421: token { $$ = 421; }; n422: token { $$ = 422; }; n423: token { $$ = 423; }; n424: token { $$ = 424; }; n425: token { $$ = 425; }; n426: token { $$ = 426; }; n427: token { $$ = 427; }; n428: token { $$ = 428; }; n429: token { $$ = 429; }; n430: token { $$ = 430; }; n431: token { $$ = 431; }; n432: token { $$ = 432; }; n433: token { $$ = 433; }; n434: token { $$ = 434; }; n435: token { $$ = 435; }; n436: token { $$ = 436; }; n437: token { $$ = 437; }; n438: token { $$ = 438; }; n439: token { $$ = 439; }; n440: token { $$ = 440; }; n441: token { $$ = 441; }; n442: token { $$ = 442; }; n443: token { $$ = 443; }; n444: token { $$ = 444; }; n445: token { $$ = 445; }; n446: token { $$ = 446; }; n447: token { $$ = 447; }; n448: token { $$ = 448; }; n449: token { $$ = 449; }; n450: token { $$ = 450; }; n451: token { $$ = 451; }; n452: token { $$ = 452; }; n453: token { $$ = 453; }; n454: token { $$ = 454; }; n455: token { $$ = 455; }; n456: token { $$ = 456; }; n457: token { $$ = 457; }; n458: token { $$ = 458; }; n459: token { $$ = 459; }; n460: token { $$ = 460; }; n461: token { $$ = 461; }; n462: token { $$ = 462; }; n463: token { $$ = 463; }; n464: token { $$ = 464; }; n465: token { $$ = 465; }; n466: 
token { $$ = 466; }; n467: token { $$ = 467; }; n468: token { $$ = 468; }; n469: token { $$ = 469; }; n470: token { $$ = 470; }; n471: token { $$ = 471; }; n472: token { $$ = 472; }; n473: token { $$ = 473; }; n474: token { $$ = 474; }; n475: token { $$ = 475; }; n476: token { $$ = 476; }; n477: token { $$ = 477; }; n478: token { $$ = 478; }; n479: token { $$ = 479; }; n480: token { $$ = 480; }; n481: token { $$ = 481; }; n482: token { $$ = 482; }; n483: token { $$ = 483; }; n484: token { $$ = 484; }; n485: token { $$ = 485; }; n486: token { $$ = 486; }; n487: token { $$ = 487; }; n488: token { $$ = 488; }; n489: token { $$ = 489; }; n490: token { $$ = 490; }; n491: token { $$ = 491; }; n492: token { $$ = 492; }; n493: token { $$ = 493; }; n494: token { $$ = 494; }; n495: token { $$ = 495; }; n496: token { $$ = 496; }; n497: token { $$ = 497; }; n498: token { $$ = 498; }; n499: token { $$ = 499; }; n500: token { $$ = 500; }; n501: token { $$ = 501; }; n502: token { $$ = 502; }; n503: token { $$ = 503; }; n504: token { $$ = 504; }; n505: token { $$ = 505; }; n506: token { $$ = 506; }; n507: token { $$ = 507; }; n508: token { $$ = 508; }; n509: token { $$ = 509; }; n510: token { $$ = 510; }; n511: token { $$ = 511; }; n512: token { $$ = 512; }; n513: token { $$ = 513; }; n514: token { $$ = 514; }; n515: token { $$ = 515; }; n516: token { $$ = 516; }; n517: token { $$ = 517; }; n518: token { $$ = 518; }; n519: token { $$ = 519; }; n520: token { $$ = 520; }; n521: token { $$ = 521; }; n522: token { $$ = 522; }; n523: token { $$ = 523; }; n524: token { $$ = 524; }; n525: token { $$ = 525; }; n526: token { $$ = 526; }; n527: token { $$ = 527; }; n528: token { $$ = 528; }; n529: token { $$ = 529; }; n530: token { $$ = 530; }; n531: token { $$ = 531; }; n532: token { $$ = 532; }; n533: token { $$ = 533; }; n534: token { $$ = 534; }; n535: token { $$ = 535; }; n536: token { $$ = 536; }; n537: token { $$ = 537; }; n538: token { $$ = 538; }; n539: token { $$ = 539; }; n540: 
token { $$ = 540; }; n541: token { $$ = 541; }; n542: token { $$ = 542; }; n543: token { $$ = 543; }; n544: token { $$ = 544; }; n545: token { $$ = 545; }; n546: token { $$ = 546; }; n547: token { $$ = 547; }; n548: token { $$ = 548; }; n549: token { $$ = 549; }; n550: token { $$ = 550; }; n551: token { $$ = 551; }; n552: token { $$ = 552; }; n553: token { $$ = 553; }; n554: token { $$ = 554; }; n555: token { $$ = 555; }; n556: token { $$ = 556; }; n557: token { $$ = 557; }; n558: token { $$ = 558; }; n559: token { $$ = 559; }; n560: token { $$ = 560; }; n561: token { $$ = 561; }; n562: token { $$ = 562; }; n563: token { $$ = 563; }; n564: token { $$ = 564; }; n565: token { $$ = 565; }; n566: token { $$ = 566; }; n567: token { $$ = 567; }; n568: token { $$ = 568; }; n569: token { $$ = 569; }; n570: token { $$ = 570; }; n571: token { $$ = 571; }; n572: token { $$ = 572; }; n573: token { $$ = 573; }; n574: token { $$ = 574; }; n575: token { $$ = 575; }; n576: token { $$ = 576; }; n577: token { $$ = 577; }; n578: token { $$ = 578; }; n579: token { $$ = 579; }; n580: token { $$ = 580; }; n581: token { $$ = 581; }; n582: token { $$ = 582; }; n583: token { $$ = 583; }; n584: token { $$ = 584; }; n585: token { $$ = 585; }; n586: token { $$ = 586; }; n587: token { $$ = 587; }; n588: token { $$ = 588; }; n589: token { $$ = 589; }; n590: token { $$ = 590; }; n591: token { $$ = 591; }; n592: token { $$ = 592; }; n593: token { $$ = 593; }; n594: token { $$ = 594; }; n595: token { $$ = 595; }; n596: token { $$ = 596; }; n597: token { $$ = 597; }; n598: token { $$ = 598; }; n599: token { $$ = 599; }; n600: token { $$ = 600; }; n601: token { $$ = 601; }; n602: token { $$ = 602; }; n603: token { $$ = 603; }; n604: token { $$ = 604; }; n605: token { $$ = 605; }; n606: token { $$ = 606; }; n607: token { $$ = 607; }; n608: token { $$ = 608; }; n609: token { $$ = 609; }; n610: token { $$ = 610; }; n611: token { $$ = 611; }; n612: token { $$ = 612; }; n613: token { $$ = 613; }; n614: 
token { $$ = 614; }; n615: token { $$ = 615; }; n616: token { $$ = 616; }; n617: token { $$ = 617; }; n618: token { $$ = 618; }; n619: token { $$ = 619; }; n620: token { $$ = 620; }; n621: token { $$ = 621; }; n622: token { $$ = 622; }; n623: token { $$ = 623; }; n624: token { $$ = 624; }; n625: token { $$ = 625; }; n626: token { $$ = 626; }; n627: token { $$ = 627; }; n628: token { $$ = 628; }; n629: token { $$ = 629; }; n630: token { $$ = 630; }; n631: token { $$ = 631; }; n632: token { $$ = 632; }; n633: token { $$ = 633; }; n634: token { $$ = 634; }; n635: token { $$ = 635; }; n636: token { $$ = 636; }; n637: token { $$ = 637; }; n638: token { $$ = 638; }; n639: token { $$ = 639; }; n640: token { $$ = 640; }; n641: token { $$ = 641; }; n642: token { $$ = 642; }; n643: token { $$ = 643; }; n644: token { $$ = 644; }; n645: token { $$ = 645; }; n646: token { $$ = 646; }; n647: token { $$ = 647; }; n648: token { $$ = 648; }; n649: token { $$ = 649; }; n650: token { $$ = 650; }; n651: token { $$ = 651; }; n652: token { $$ = 652; }; n653: token { $$ = 653; }; n654: token { $$ = 654; }; n655: token { $$ = 655; }; n656: token { $$ = 656; }; n657: token { $$ = 657; }; n658: token { $$ = 658; }; n659: token { $$ = 659; }; n660: token { $$ = 660; }; n661: token { $$ = 661; }; n662: token { $$ = 662; }; n663: token { $$ = 663; }; n664: token { $$ = 664; }; n665: token { $$ = 665; }; n666: token { $$ = 666; }; n667: token { $$ = 667; }; n668: token { $$ = 668; }; n669: token { $$ = 669; }; n670: token { $$ = 670; }; n671: token { $$ = 671; }; n672: token { $$ = 672; }; n673: token { $$ = 673; }; n674: token { $$ = 674; }; n675: token { $$ = 675; }; n676: token { $$ = 676; }; n677: token { $$ = 677; }; n678: token { $$ = 678; }; n679: token { $$ = 679; }; n680: token { $$ = 680; }; n681: token { $$ = 681; }; n682: token { $$ = 682; }; n683: token { $$ = 683; }; n684: token { $$ = 684; }; n685: token { $$ = 685; }; n686: token { $$ = 686; }; n687: token { $$ = 687; }; n688: 
token { $$ = 688; }; n689: token { $$ = 689; }; n690: token { $$ = 690; }; n691: token { $$ = 691; }; n692: token { $$ = 692; }; n693: token { $$ = 693; }; n694: token { $$ = 694; }; n695: token { $$ = 695; }; n696: token { $$ = 696; }; n697: token { $$ = 697; }; n698: token { $$ = 698; }; n699: token { $$ = 699; }; n700: token { $$ = 700; }; n701: token { $$ = 701; }; n702: token { $$ = 702; }; n703: token { $$ = 703; }; n704: token { $$ = 704; }; n705: token { $$ = 705; }; n706: token { $$ = 706; }; n707: token { $$ = 707; }; n708: token { $$ = 708; }; n709: token { $$ = 709; }; n710: token { $$ = 710; }; n711: token { $$ = 711; }; n712: token { $$ = 712; }; n713: token { $$ = 713; }; n714: token { $$ = 714; }; n715: token { $$ = 715; }; n716: token { $$ = 716; }; n717: token { $$ = 717; }; n718: token { $$ = 718; }; n719: token { $$ = 719; }; n720: token { $$ = 720; }; n721: token { $$ = 721; }; n722: token { $$ = 722; }; n723: token { $$ = 723; }; n724: token { $$ = 724; }; n725: token { $$ = 725; }; n726: token { $$ = 726; }; n727: token { $$ = 727; }; n728: token { $$ = 728; }; n729: token { $$ = 729; }; n730: token { $$ = 730; }; n731: token { $$ = 731; }; n732: token { $$ = 732; }; n733: token { $$ = 733; }; n734: token { $$ = 734; }; n735: token { $$ = 735; }; n736: token { $$ = 736; }; n737: token { $$ = 737; }; n738: token { $$ = 738; }; n739: token { $$ = 739; }; n740: token { $$ = 740; }; n741: token { $$ = 741; }; n742: token { $$ = 742; }; n743: token { $$ = 743; }; n744: token { $$ = 744; }; n745: token { $$ = 745; }; n746: token { $$ = 746; }; n747: token { $$ = 747; }; n748: token { $$ = 748; }; n749: token { $$ = 749; }; n750: token { $$ = 750; }; n751: token { $$ = 751; }; n752: token { $$ = 752; }; n753: token { $$ = 753; }; n754: token { $$ = 754; }; n755: token { $$ = 755; }; n756: token { $$ = 756; }; n757: token { $$ = 757; }; n758: token { $$ = 758; }; n759: token { $$ = 759; }; n760: token { $$ = 760; }; n761: token { $$ = 761; }; n762: 
token { $$ = 762; }; n763: token { $$ = 763; }; n764: token { $$ = 764; }; n765: token { $$ = 765; }; n766: token { $$ = 766; }; n767: token { $$ = 767; }; n768: token { $$ = 768; }; n769: token { $$ = 769; }; n770: token { $$ = 770; }; n771: token { $$ = 771; }; n772: token { $$ = 772; }; n773: token { $$ = 773; }; n774: token { $$ = 774; }; n775: token { $$ = 775; }; n776: token { $$ = 776; }; n777: token { $$ = 777; }; n778: token { $$ = 778; }; n779: token { $$ = 779; }; n780: token { $$ = 780; }; n781: token { $$ = 781; }; n782: token { $$ = 782; }; n783: token { $$ = 783; }; n784: token { $$ = 784; }; n785: token { $$ = 785; }; n786: token { $$ = 786; }; n787: token { $$ = 787; }; n788: token { $$ = 788; }; n789: token { $$ = 789; }; n790: token { $$ = 790; }; n791: token { $$ = 791; }; n792: token { $$ = 792; }; n793: token { $$ = 793; }; n794: token { $$ = 794; }; n795: token { $$ = 795; }; n796: token { $$ = 796; }; n797: token { $$ = 797; }; n798: token { $$ = 798; }; n799: token { $$ = 799; }; n800: token { $$ = 800; }; n801: token { $$ = 801; }; n802: token { $$ = 802; }; n803: token { $$ = 803; }; n804: token { $$ = 804; }; n805: token { $$ = 805; }; n806: token { $$ = 806; }; n807: token { $$ = 807; }; n808: token { $$ = 808; }; n809: token { $$ = 809; }; n810: token { $$ = 810; }; n811: token { $$ = 811; }; n812: token { $$ = 812; }; n813: token { $$ = 813; }; n814: token { $$ = 814; }; n815: token { $$ = 815; }; n816: token { $$ = 816; }; n817: token { $$ = 817; }; n818: token { $$ = 818; }; n819: token { $$ = 819; }; n820: token { $$ = 820; }; n821: token { $$ = 821; }; n822: token { $$ = 822; }; n823: token { $$ = 823; }; n824: token { $$ = 824; }; n825: token { $$ = 825; }; n826: token { $$ = 826; }; n827: token { $$ = 827; }; n828: token { $$ = 828; }; n829: token { $$ = 829; }; n830: token { $$ = 830; }; n831: token { $$ = 831; }; n832: token { $$ = 832; }; n833: token { $$ = 833; }; n834: token { $$ = 834; }; n835: token { $$ = 835; }; n836: 
token { $$ = 836; }; n837: token { $$ = 837; }; n838: token { $$ = 838; }; n839: token { $$ = 839; }; n840: token { $$ = 840; }; n841: token { $$ = 841; }; n842: token { $$ = 842; }; n843: token { $$ = 843; }; n844: token { $$ = 844; }; n845: token { $$ = 845; }; n846: token { $$ = 846; }; n847: token { $$ = 847; }; n848: token { $$ = 848; }; n849: token { $$ = 849; }; n850: token { $$ = 850; }; n851: token { $$ = 851; }; n852: token { $$ = 852; }; n853: token { $$ = 853; }; n854: token { $$ = 854; }; n855: token { $$ = 855; }; n856: token { $$ = 856; }; n857: token { $$ = 857; }; n858: token { $$ = 858; }; n859: token { $$ = 859; }; n860: token { $$ = 860; }; n861: token { $$ = 861; }; n862: token { $$ = 862; }; n863: token { $$ = 863; }; n864: token { $$ = 864; }; n865: token { $$ = 865; }; n866: token { $$ = 866; }; n867: token { $$ = 867; }; n868: token { $$ = 868; }; n869: token { $$ = 869; }; n870: token { $$ = 870; }; n871: token { $$ = 871; }; n872: token { $$ = 872; }; n873: token { $$ = 873; }; n874: token { $$ = 874; }; n875: token { $$ = 875; }; n876: token { $$ = 876; }; n877: token { $$ = 877; }; n878: token { $$ = 878; }; n879: token { $$ = 879; }; n880: token { $$ = 880; }; n881: token { $$ = 881; }; n882: token { $$ = 882; }; n883: token { $$ = 883; }; n884: token { $$ = 884; }; n885: token { $$ = 885; }; n886: token { $$ = 886; }; n887: token { $$ = 887; }; n888: token { $$ = 888; }; n889: token { $$ = 889; }; n890: token { $$ = 890; }; n891: token { $$ = 891; }; n892: token { $$ = 892; }; n893: token { $$ = 893; }; n894: token { $$ = 894; }; n895: token { $$ = 895; }; n896: token { $$ = 896; }; n897: token { $$ = 897; }; n898: token { $$ = 898; }; n899: token { $$ = 899; }; n900: token { $$ = 900; }; n901: token { $$ = 901; }; n902: token { $$ = 902; }; n903: token { $$ = 903; }; n904: token { $$ = 904; }; n905: token { $$ = 905; }; n906: token { $$ = 906; }; n907: token { $$ = 907; }; n908: token { $$ = 908; }; n909: token { $$ = 909; }; n910: 
token { $$ = 910; }; n911: token { $$ = 911; }; n912: token { $$ = 912; }; n913: token { $$ = 913; }; n914: token { $$ = 914; }; n915: token { $$ = 915; }; n916: token { $$ = 916; }; n917: token { $$ = 917; }; n918: token { $$ = 918; }; n919: token { $$ = 919; }; n920: token { $$ = 920; }; n921: token { $$ = 921; }; n922: token { $$ = 922; }; n923: token { $$ = 923; }; n924: token { $$ = 924; }; n925: token { $$ = 925; }; n926: token { $$ = 926; }; n927: token { $$ = 927; }; n928: token { $$ = 928; }; n929: token { $$ = 929; }; n930: token { $$ = 930; }; n931: token { $$ = 931; }; n932: token { $$ = 932; }; n933: token { $$ = 933; }; n934: token { $$ = 934; }; n935: token { $$ = 935; }; n936: token { $$ = 936; }; n937: token { $$ = 937; }; n938: token { $$ = 938; }; n939: token { $$ = 939; }; n940: token { $$ = 940; }; n941: token { $$ = 941; }; n942: token { $$ = 942; }; n943: token { $$ = 943; }; n944: token { $$ = 944; }; n945: token { $$ = 945; }; n946: token { $$ = 946; }; n947: token { $$ = 947; }; n948: token { $$ = 948; }; n949: token { $$ = 949; }; n950: token { $$ = 950; }; n951: token { $$ = 951; }; n952: token { $$ = 952; }; n953: token { $$ = 953; }; n954: token { $$ = 954; }; n955: token { $$ = 955; }; n956: token { $$ = 956; }; n957: token { $$ = 957; }; n958: token { $$ = 958; }; n959: token { $$ = 959; }; n960: token { $$ = 960; }; n961: token { $$ = 961; }; n962: token { $$ = 962; }; n963: token { $$ = 963; }; n964: token { $$ = 964; }; n965: token { $$ = 965; }; n966: token { $$ = 966; }; n967: token { $$ = 967; }; n968: token { $$ = 968; }; n969: token { $$ = 969; }; n970: token { $$ = 970; }; n971: token { $$ = 971; }; n972: token { $$ = 972; }; n973: token { $$ = 973; }; n974: token { $$ = 974; }; n975: token { $$ = 975; }; n976: token { $$ = 976; }; n977: token { $$ = 977; }; n978: token { $$ = 978; }; n979: token { $$ = 979; }; n980: token { $$ = 980; }; n981: token { $$ = 981; }; n982: token { $$ = 982; }; n983: token { $$ = 983; }; n984: 
token { $$ = 984; }; n985: token { $$ = 985; }; n986: token { $$ = 986; }; n987: token { $$ = 987; }; n988: token { $$ = 988; }; n989: token { $$ = 989; }; n990: token { $$ = 990; }; n991: token { $$ = 991; }; n992: token { $$ = 992; }; n993: token { $$ = 993; }; n994: token { $$ = 994; }; n995: token { $$ = 995; }; n996: token { $$ = 996; }; n997: token { $$ = 997; }; n998: token { $$ = 998; }; n999: token { $$ = 999; }; n1000: token { $$ = 1000; }; %% /* A C error reporting function. */ /* !POSIX */ static void yyerror (const char *msg) { fprintf (stderr, "%s\n", msg); } static int yylex (void) { static int return_token = 1; static int counter = 1; if (counter > MAX) { assert (counter++ == MAX + 1); return 0; } if (return_token) { return_token = 0; return token; } return_token = 1; return counter++; } #include <stdlib.h> /* getenv. */ #include <string.h> /* strcmp. */ int main (int argc, char const* argv[]) { (void) argc; (void) argv; return yyparse (); } ./torture.at:393: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -v -o input.c input.y ./torture.at:237: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./torture.at:141: $PREPARSER ./input stderr: ./torture.at:141: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 604. torture.at:132: ok stderr: stdout: ./torture.at:238: $PREPARSER ./input stderr: ./torture.at:238: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 605. torture.at:216: ok 614. torture.at:485: testing Exploding the Stack Size with Alloca ... ./torture.at:494: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./torture.at:494: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS 615. torture.at:531: testing Exploding the Stack Size with Malloc ... 
./torture.at:535: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./torture.at:535: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./torture.at:497: VALGRIND_OPTS="$VALGRIND_OPTS --log-fd=1" $PREPARSER ./input 20 stderr: ./torture.at:497: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./torture.at:500: VALGRIND_OPTS="$VALGRIND_OPTS --log-fd=1" $PREPARSER ./input 900 stderr: ./torture.at:500: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./torture.at:504: VALGRIND_OPTS="$VALGRIND_OPTS --log-fd=1" $PREPARSER ./input 10000 stderr: memory exhausted memory exhausted ./torture.at:504: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: memory exhausted memory exhausted ./torture.at:510: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./torture.at:510: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./torture.at:538: VALGRIND_OPTS="$VALGRIND_OPTS --log-fd=1" $PREPARSER ./input 20 stderr: ./torture.at:538: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./torture.at:541: VALGRIND_OPTS="$VALGRIND_OPTS --log-fd=1" $PREPARSER ./input 900 stderr: ./torture.at:541: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./torture.at:545: VALGRIND_OPTS="$VALGRIND_OPTS --log-fd=1" $PREPARSER ./input 10000 stderr: memory exhausted memory exhausted ./torture.at:545: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: memory exhausted memory exhausted ./torture.at:548: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./torture.at:548: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./torture.at:513: VALGRIND_OPTS="$VALGRIND_OPTS --log-fd=1" 
$PREPARSER ./input 20 stderr: ./torture.at:513: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./torture.at:515: VALGRIND_OPTS="$VALGRIND_OPTS --log-fd=1" $PREPARSER ./input 900 stderr: ./torture.at:515: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./torture.at:517: VALGRIND_OPTS="$VALGRIND_OPTS --log-fd=1" $PREPARSER ./input 10000 stderr: memory exhausted memory exhausted ./torture.at:517: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: memory exhausted memory exhausted 614. torture.at:485: ok stderr: stdout: ./torture.at:551: VALGRIND_OPTS="$VALGRIND_OPTS --log-fd=1" $PREPARSER ./input 20 stderr: ./torture.at:551: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./torture.at:553: VALGRIND_OPTS="$VALGRIND_OPTS --log-fd=1" $PREPARSER ./input 900 stderr: ./torture.at:553: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./torture.at:555: VALGRIND_OPTS="$VALGRIND_OPTS --log-fd=1" $PREPARSER ./input 10000 stderr: memory exhausted memory exhausted ./torture.at:555: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: memory exhausted memory exhausted 615. torture.at:531: ok 616. existing.at:74: testing GNU AWK 3.1.0 Grammar: LALR(1) ... ./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y 617. existing.at:74: testing GNU AWK 3.1.0 Grammar: IELR(1) ... 
./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y ./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Werror ./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Werror stderr: input.y:66.10: error: empty rule without %empty [-Werror=empty-rule] input.y:170.10: error: empty rule without %empty [-Werror=empty-rule] input.y:175.10: error: empty rule without %empty [-Werror=empty-rule] input.y:180.10: error: empty rule without %empty [-Werror=empty-rule] input.y:188.10: error: empty rule without %empty [-Werror=empty-rule] input.y:202.10: error: empty rule without %empty [-Werror=empty-rule] input.y:207.10: error: empty rule without %empty [-Werror=empty-rule] input.y:221.10: error: empty rule without %empty [-Werror=empty-rule] input.y:300.10: error: empty rule without %empty [-Werror=empty-rule] input.y:323.10: error: empty rule without %empty [-Werror=empty-rule] input.y: error: 65 shift/reduce conflicts [-Werror=conflicts-sr] input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples input.y:39.1-5: error: useless associativity for FUNC_CALL, use %precedence [-Werror=precedence] input.y:44.1-5: error: useless associativity for YNUMBER, use %precedence [-Werror=precedence] input.y:44.1-5: error: useless associativity for YSTRING, use %precedence [-Werror=precedence] input.y:42.1-9: error: useless precedence and associativity for APPEND_OP [-Werror=precedence] 
input.y:33.1-6: error: useless associativity for ASSIGNOP, use %precedence [-Werror=precedence] input.y:43.1-5: error: useless associativity for CONCAT_OP, use %precedence [-Werror=precedence] input.y:37.1-5: error: useless precedence and associativity for LEX_GETLINE [-Werror=precedence] input.y:38.1-9: error: useless associativity for LEX_IN, use %precedence [-Werror=precedence] input.y:49.1-5: error: useless associativity for INCREMENT, use %precedence [-Werror=precedence] input.y:49.1-5: error: useless associativity for DECREMENT, use %precedence [-Werror=precedence] input.y:39.1-5: error: useless associativity for LEX_BUILTIN, use %precedence [-Werror=precedence] input.y:39.1-5: error: useless associativity for LEX_LENGTH, use %precedence [-Werror=precedence] input.y:40.1-9: error: useless precedence and associativity for ',' [-Werror=precedence] input.y:47.1-6: error: useless associativity for '!', use %precedence [-Werror=precedence] input.y:47.1-6: error: useless associativity for UNARY, use %precedence [-Werror=precedence] input.y:50.1-5: error: useless associativity for '$', use %precedence [-Werror=precedence] input.y:51.1-5: error: useless associativity for '(', use %precedence [-Werror=precedence] input.y:51.1-5: error: useless precedence and associativity for ')' [-Werror=precedence] input.y: error: fix-its can be applied. Rerun with option '--update'. 
[-Werror=other] ./existing.at:74: sed 's,.*/$,,' stderr 1>&2 ./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=error stderr: input.y:66.10: error: empty rule without %empty [-Werror=empty-rule] input.y:170.10: error: empty rule without %empty [-Werror=empty-rule] input.y:175.10: error: empty rule without %empty [-Werror=empty-rule] input.y:180.10: error: empty rule without %empty [-Werror=empty-rule] input.y:188.10: error: empty rule without %empty [-Werror=empty-rule] input.y:202.10: error: empty rule without %empty [-Werror=empty-rule] input.y:207.10: error: empty rule without %empty [-Werror=empty-rule] input.y:221.10: error: empty rule without %empty [-Werror=empty-rule] input.y:300.10: error: empty rule without %empty [-Werror=empty-rule] input.y:323.10: error: empty rule without %empty [-Werror=empty-rule] input.y: error: 65 shift/reduce conflicts [-Werror=conflicts-sr] input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples input.y:39.1-5: error: useless associativity for FUNC_CALL, use %precedence [-Werror=precedence] input.y:44.1-5: error: useless associativity for YNUMBER, use %precedence [-Werror=precedence] input.y:44.1-5: error: useless associativity for YSTRING, use %precedence [-Werror=precedence] input.y:42.1-9: error: useless precedence and associativity for APPEND_OP [-Werror=precedence] input.y:33.1-6: error: useless associativity for ASSIGNOP, use %precedence [-Werror=precedence] input.y:43.1-5: error: useless associativity for CONCAT_OP, use %precedence [-Werror=precedence] input.y:37.1-5: error: useless precedence and associativity for LEX_GETLINE [-Werror=precedence] input.y:38.1-9: error: useless associativity for LEX_IN, use %precedence [-Werror=precedence] input.y:49.1-5: 
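[Editor's note: the two diagnostic families above recur throughout these existing.at runs. Bison 3.8 asks for an explicit %empty marker on rules with an empty right-hand side, and for %precedence in place of %left/%right/%nonassoc when the associativity is never exercised. A minimal grammar fragment sketching both fixes; the rule and token names here are hypothetical, not taken from the tested awk grammar:]

```
%precedence UNARY      /* was: %right UNARY -- associativity unused, only precedence matters */

%%

opt_newlines:
  %empty               /* was: a bare empty right-hand side, flagged by -Wempty-rule */
| opt_newlines '\n'
;
```

[Running bison with --update, as the fix-it note suggests, rewrites the grammar file with these changes applied automatically.]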
error: useless associativity for INCREMENT, use %precedence [-Werror=precedence] input.y:49.1-5: error: useless associativity for DECREMENT, use %precedence [-Werror=precedence] input.y:39.1-5: error: useless associativity for LEX_BUILTIN, use %precedence [-Werror=precedence] input.y:39.1-5: error: useless associativity for LEX_LENGTH, use %precedence [-Werror=precedence] input.y:40.1-9: error: useless precedence and associativity for ',' [-Werror=precedence] input.y:47.1-6: error: useless associativity for '!', use %precedence [-Werror=precedence] input.y:47.1-6: error: useless associativity for UNARY, use %precedence [-Werror=precedence] input.y:50.1-5: error: useless associativity for '$', use %precedence [-Werror=precedence] input.y:51.1-5: error: useless associativity for '(', use %precedence [-Werror=precedence] input.y:51.1-5: error: useless precedence and associativity for ')' [-Werror=precedence] input.y: error: fix-its can be applied. Rerun with option '--update'. [-Werror=other] ./existing.at:74: sed 's,.*/$,,' stderr 1>&2 ./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=error ./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Wnone,none -Werror --trace=none ./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Wnone,none -Werror --trace=none ./existing.at:74: COLUMNS=1000; export 
COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=none -Werror --trace=none ./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=none -Werror --trace=none ./existing.at:74: sed -n 's/^State //p' input.output | tail -1 ./existing.at:74: sed 's/^%define lr.type .*$//' input.y > input-lalr.y ./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --report=all,no-cex input-lalr.y ./existing.at:74: sed -n 's/^State //p' input.output | tail -1 ./existing.at:74: sed 's/^%define lr.type .*$//' input.y > input-lalr.y ./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --report=all,no-cex input-lalr.y stderr: input-lalr.y: warning: 65 shift/reduce conflicts [-Wconflicts-sr] input-lalr.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples stdout: ./existing.at:74: diff -u input-lalr.output input.output | sed -n '/^@@/,$p' | sed 's/^ $//' ./existing.at:74: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: input-lalr.y: warning: 65 shift/reduce conflicts [-Wconflicts-sr] input-lalr.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples stdout: ./existing.at:74: diff -u input-lalr.output input.output | sed -n '/^@@/,$p' | sed 's/^ $//' ./existing.at:74: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./existing.at:74: $PREPARSER ./input stderr: stderr: stdout: ./existing.at:74: $PREPARSER ./input syntax error, unexpected '*', 
expecting NEWLINE or '{' or ';' ./existing.at:74: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: ./existing.at:74: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 616. existing.at:74: ok 617. existing.at:74: ok 618. existing.at:74: testing GNU AWK 3.1.0 Grammar: Canonical LR(1) ... ./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y 619. existing.at:808: testing GNU Cim Grammar: LALR(1) ... ./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y ./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Werror stderr: input.y:128.18: error: empty rule without %empty [-Werror=empty-rule] input.y:137.18: error: empty rule without %empty [-Werror=empty-rule] input.y:142.18: error: empty rule without %empty [-Werror=empty-rule] input.y:161.18: error: empty rule without %empty [-Werror=empty-rule] input.y:179.18: error: empty rule without %empty [-Werror=empty-rule] input.y:205.18: error: empty rule without %empty [-Werror=empty-rule] input.y:213.18: error: empty rule without %empty [-Werror=empty-rule] input.y:225.18: error: empty rule without %empty [-Werror=empty-rule] input.y:292.18: error: empty rule without %empty [-Werror=empty-rule] input.y:294.20: error: empty rule without %empty [-Werror=empty-rule] input.y:367.18: error: empty rule without %empty [-Werror=empty-rule] input.y:373.18: error: empty rule without %empty [-Werror=empty-rule] input.y:387.18: error: empty rule without %empty [-Werror=empty-rule] input.y:401.18: error: empty rule without %empty 
[-Werror=empty-rule] input.y:413.18: error: empty rule without %empty [-Werror=empty-rule] input.y:443.18: error: empty rule without %empty [-Werror=empty-rule] input.y:471.18: error: empty rule without %empty [-Werror=empty-rule] input.y:474.18: error: empty rule without %empty [-Werror=empty-rule] input.y:489.18: error: empty rule without %empty [-Werror=empty-rule] input.y:506.18: error: empty rule without %empty [-Werror=empty-rule] input.y:587.18: error: empty rule without %empty [-Werror=empty-rule] input.y:591.18: error: empty rule without %empty [-Werror=empty-rule] input.y: error: 78 shift/reduce conflicts [-Werror=conflicts-sr] input.y: error: 10 reduce/reduce conflicts [-Werror=conflicts-rr] input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples input.y:72.1-5: error: useless associativity for HQUA, use %precedence [-Werror=precedence] input.y:53.1-6: error: useless associativity for HASSIGN, use %precedence [-Werror=precedence] input.y:54.1-5: error: useless associativity for HORELSE, use %precedence [-Werror=precedence] input.y:55.1-5: error: useless associativity for HANDTHEN, use %precedence [-Werror=precedence] input.y:61.1-5: error: useless associativity for HNOT, use %precedence [-Werror=precedence] input.y:68.1-5: error: useless associativity for UNEAR, use %precedence [-Werror=precedence] input.y: error: fix-its can be applied. Rerun with option '--update'. 
[-Werror=other] ./existing.at:808: sed 's,.*/$,,' stderr 1>&2 ./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=error ./torture.at:394: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS ./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Wnone,none -Werror --trace=none ./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Werror ./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=none -Werror --trace=none ./existing.at:808: sed -n 's/^State //p' input.output | tail -1 ./existing.at:808: sed 's/^%define lr.type .*$//' input.y > input-lalr.y ./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --report=all,no-cex input-lalr.y stderr: input-lalr.y: warning: 78 shift/reduce conflicts [-Wconflicts-sr] input-lalr.y: warning: 10 reduce/reduce conflicts [-Wconflicts-rr] input-lalr.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples stdout: ./existing.at:808: diff -u input-lalr.output input.output | sed -n '/^@@/,$p' | sed 's/^ $//' ./existing.at:808: grep 
'^State.*conflicts:' input.output ./existing.at:808: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: input.y:66.10: error: empty rule without %empty [-Werror=empty-rule] input.y:170.10: error: empty rule without %empty [-Werror=empty-rule] input.y:175.10: error: empty rule without %empty [-Werror=empty-rule] input.y:180.10: error: empty rule without %empty [-Werror=empty-rule] input.y:188.10: error: empty rule without %empty [-Werror=empty-rule] input.y:202.10: error: empty rule without %empty [-Werror=empty-rule] input.y:207.10: error: empty rule without %empty [-Werror=empty-rule] input.y:221.10: error: empty rule without %empty [-Werror=empty-rule] input.y:300.10: error: empty rule without %empty [-Werror=empty-rule] input.y:323.10: error: empty rule without %empty [-Werror=empty-rule] input.y: error: 265 shift/reduce conflicts [-Werror=conflicts-sr] input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples input.y:39.1-5: error: useless associativity for FUNC_CALL, use %precedence [-Werror=precedence] input.y:44.1-5: error: useless associativity for YNUMBER, use %precedence [-Werror=precedence] input.y:44.1-5: error: useless associativity for YSTRING, use %precedence [-Werror=precedence] input.y:42.1-9: error: useless precedence and associativity for APPEND_OP [-Werror=precedence] input.y:33.1-6: error: useless associativity for ASSIGNOP, use %precedence [-Werror=precedence] input.y:43.1-5: error: useless associativity for CONCAT_OP, use %precedence [-Werror=precedence] input.y:37.1-5: error: useless precedence and associativity for LEX_GETLINE [-Werror=precedence] input.y:38.1-9: error: useless associativity for LEX_IN, use %precedence [-Werror=precedence] input.y:49.1-5: error: useless associativity for INCREMENT, use %precedence [-Werror=precedence] input.y:49.1-5: error: useless associativity for DECREMENT, use %precedence [-Werror=precedence] input.y:39.1-5: error: useless associativity for LEX_BUILTIN, use 
%precedence [-Werror=precedence] input.y:39.1-5: error: useless associativity for LEX_LENGTH, use %precedence [-Werror=precedence] input.y:40.1-9: error: useless precedence and associativity for ',' [-Werror=precedence] input.y:47.1-6: error: useless associativity for '!', use %precedence [-Werror=precedence] input.y:47.1-6: error: useless associativity for UNARY, use %precedence [-Werror=precedence] input.y:50.1-5: error: useless associativity for '$', use %precedence [-Werror=precedence] input.y:51.1-5: error: useless associativity for '(', use %precedence [-Werror=precedence] input.y:51.1-5: error: useless precedence and associativity for ')' [-Werror=precedence] input.y: error: fix-its can be applied. Rerun with option '--update'. [-Werror=other] ./existing.at:74: sed 's,.*/$,,' stderr 1>&2 ./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=error stderr: stdout: ./existing.at:808: $PREPARSER ./input stderr: ./existing.at:808: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 619. existing.at:808: ok 620. existing.at:808: testing GNU Cim Grammar: IELR(1) ... 
./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y ./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Werror stderr: input.y:128.18: error: empty rule without %empty [-Werror=empty-rule] input.y:137.18: error: empty rule without %empty [-Werror=empty-rule] input.y:142.18: error: empty rule without %empty [-Werror=empty-rule] input.y:161.18: error: empty rule without %empty [-Werror=empty-rule] input.y:179.18: error: empty rule without %empty [-Werror=empty-rule] input.y:205.18: error: empty rule without %empty [-Werror=empty-rule] input.y:213.18: error: empty rule without %empty [-Werror=empty-rule] input.y:225.18: error: empty rule without %empty [-Werror=empty-rule] input.y:292.18: error: empty rule without %empty [-Werror=empty-rule] input.y:294.20: error: empty rule without %empty [-Werror=empty-rule] input.y:367.18: error: empty rule without %empty [-Werror=empty-rule] input.y:373.18: error: empty rule without %empty [-Werror=empty-rule] input.y:387.18: error: empty rule without %empty [-Werror=empty-rule] input.y:401.18: error: empty rule without %empty [-Werror=empty-rule] input.y:413.18: error: empty rule without %empty [-Werror=empty-rule] input.y:443.18: error: empty rule without %empty [-Werror=empty-rule] input.y:471.18: error: empty rule without %empty [-Werror=empty-rule] input.y:474.18: error: empty rule without %empty [-Werror=empty-rule] input.y:489.18: error: empty rule without %empty [-Werror=empty-rule] input.y:506.18: error: empty rule without %empty [-Werror=empty-rule] input.y:587.18: error: empty rule without %empty [-Werror=empty-rule] input.y:591.18: error: empty rule without %empty 
[-Werror=empty-rule]
input.y: error: 78 shift/reduce conflicts [-Werror=conflicts-sr]
input.y: error: 10 reduce/reduce conflicts [-Werror=conflicts-rr]
input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
input.y:72.1-5: error: useless associativity for HQUA, use %precedence [-Werror=precedence]
input.y:53.1-6: error: useless associativity for HASSIGN, use %precedence [-Werror=precedence]
input.y:54.1-5: error: useless associativity for HORELSE, use %precedence [-Werror=precedence]
input.y:55.1-5: error: useless associativity for HANDTHEN, use %precedence [-Werror=precedence]
input.y:61.1-5: error: useless associativity for HNOT, use %precedence [-Werror=precedence]
input.y:68.1-5: error: useless associativity for UNEAR, use %precedence [-Werror=precedence]
input.y: error: fix-its can be applied. Rerun with option '--update'. [-Werror=other]
./existing.at:808: sed 's,.*/$,,' stderr 1>&2
./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=error
./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Wnone,none -Werror --trace=none
./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Wnone,none -Werror --trace=none
./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=none -Werror --trace=none
./existing.at:808: sed -n 's/^State //p' input.output | tail -1
./existing.at:808: sed 's/^%define lr.type .*$//' input.y > input-lalr.y
./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --report=all,no-cex input-lalr.y
stderr:
input-lalr.y: warning: 78 shift/reduce conflicts [-Wconflicts-sr]
input-lalr.y: warning: 10 reduce/reduce conflicts [-Wconflicts-rr]
input-lalr.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
stdout:
./existing.at:808: diff -u input-lalr.output input.output | sed -n '/^@@/,$p' | sed 's/^ $//'
./existing.at:808: grep '^State.*conflicts:' input.output
./existing.at:808: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./existing.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=none -Werror --trace=none
stderr:
stdout:
./existing.at:808: $PREPARSER ./input
stderr:
./existing.at:808: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
620. existing.at:808: ok
621. existing.at:808: testing GNU Cim Grammar: Canonical LR(1) ...
./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y
./existing.at:74: sed -n 's/^State //p' input.output | tail -1
./existing.at:74: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./existing.at:74: $PREPARSER ./input
stderr:
./existing.at:74: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
618. existing.at:74: ok
622. existing.at:1460: testing GNU pic (Groff 1.18.1) Grammar: LALR(1) ...
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Werror
stderr:
stdout:
./torture.at:395: $PREPARSER ./input
stderr:
./torture.at:395: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
613. torture.at:385: ok
623. existing.at:1460: testing GNU pic (Groff 1.18.1) Grammar: IELR(1) ...
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y
stderr:
input.y:202.20: error: empty rule without %empty [-Werror=empty-rule]
input.y:270.7: error: empty rule without %empty [-Werror=empty-rule]
input.y:292.13: error: empty rule without %empty [-Werror=empty-rule]
input.y:309.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:382.14: error: empty rule without %empty [-Werror=empty-rule]
input.y:471.11-48: error: rule useless in parser due to conflicts [-Werror=other]
input.y:154.1-5: error: useless associativity for LABEL, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for VARIABLE, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for NUMBER, use %precedence [-Werror=precedence]
input.y:141.1-5: error: useless associativity for TEXT, use %precedence [-Werror=precedence]
input.y:157.1-5: error: useless associativity for ORDINAL, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for LAST, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for UP, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for DOWN, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for BOX, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for CIRCLE, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for ELLIPSE, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for ARC, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for LINE, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for ARROW, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for SPLINE, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for HEIGHT, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for RADIUS, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for WIDTH, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for DIAMETER, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for FROM, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for TO, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for AT, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless precedence and associativity for SOLID [-Werror=precedence]
input.y:153.1-5: error: useless associativity for DOTTED, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for DASHED, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for CHOP, use %precedence [-Werror=precedence]
input.y:147.1-5: error: useless precedence and associativity for LJUST [-Werror=precedence]
input.y:147.1-5: error: useless precedence and associativity for RJUST [-Werror=precedence]
input.y:147.1-5: error: useless precedence and associativity for ABOVE [-Werror=precedence]
input.y:147.1-5: error: useless precedence and associativity for BELOW [-Werror=precedence]
input.y:176.1-5: error: useless associativity for OF, use %precedence [-Werror=precedence]
input.y:176.1-5: error: useless associativity for BETWEEN, use %precedence [-Werror=precedence]
input.y:177.1-5: error: useless associativity for AND, use %precedence [-Werror=precedence]
input.y:157.1-5: error: useless associativity for HERE, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_N, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_E, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_W, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_S, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_NE, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_SE, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_NW, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_SW, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_C, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for DOT_START, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for DOT_END, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for SIN, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for COS, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for ATAN2, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for LOG, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for EXP, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for SQRT, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for K_MAX, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for K_MIN, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for INT, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for RAND, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for SRAND, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for TOP, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for BOTTOM, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for UPPER, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for LOWER, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for LEFT_CORNER, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for RIGHT_CORNER, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for NORTH, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for SOUTH, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for EAST, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for WEST, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for CENTER, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for END, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for START, use %precedence [-Werror=precedence]
input.y:140.1-5: error: useless associativity for PLOT, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for THICKNESS, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for FILL, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless precedence and associativity for COLORED [-Werror=precedence]
input.y:153.1-5: error: useless precedence and associativity for OUTLINED [-Werror=precedence]
input.y:141.1-5: error: useless associativity for SPRINTF, use %precedence [-Werror=precedence]
input.y:137.1-5: error: useless associativity for '.', use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for '(', use %precedence [-Werror=precedence]
input.y:157.1-5: error: useless associativity for '`', use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for '[', use %precedence [-Werror=precedence]
input.y:170.1-5: error: useless associativity for ',', use %precedence [-Werror=precedence]
input.y:181.1-6: error: useless associativity for '!', use %precedence [-Werror=precedence]
input.y: error: fix-its can be applied. Rerun with option '--update'. [-Werror=other]
./existing.at:1460: sed 's,.*/$,,' stderr 1>&2
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=error
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Werror
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Wnone,none -Werror --trace=none
stderr:
input.y:202.20: error: empty rule without %empty [-Werror=empty-rule]
input.y:270.7: error: empty rule without %empty [-Werror=empty-rule]
input.y:292.13: error: empty rule without %empty [-Werror=empty-rule]
input.y:309.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:382.14: error: empty rule without %empty [-Werror=empty-rule]
input.y:471.11-48: error: rule useless in parser due to conflicts [-Werror=other]
input.y:154.1-5: error: useless associativity for LABEL, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for VARIABLE, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for NUMBER, use %precedence [-Werror=precedence]
input.y:141.1-5: error: useless associativity for TEXT, use %precedence [-Werror=precedence]
input.y:157.1-5: error: useless associativity for ORDINAL, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for LAST, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for UP, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for DOWN, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for BOX, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for CIRCLE, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for ELLIPSE, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for ARC, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for LINE, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for ARROW, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for SPLINE, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for HEIGHT, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for RADIUS, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for WIDTH, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for DIAMETER, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for FROM, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for TO, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for AT, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless precedence and associativity for SOLID [-Werror=precedence]
input.y:153.1-5: error: useless associativity for DOTTED, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for DASHED, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for CHOP, use %precedence [-Werror=precedence]
input.y:147.1-5: error: useless precedence and associativity for LJUST [-Werror=precedence]
input.y:147.1-5: error: useless precedence and associativity for RJUST [-Werror=precedence]
input.y:147.1-5: error: useless precedence and associativity for ABOVE [-Werror=precedence]
input.y:147.1-5: error: useless precedence and associativity for BELOW [-Werror=precedence]
input.y:176.1-5: error: useless associativity for OF, use %precedence [-Werror=precedence]
input.y:176.1-5: error: useless associativity for BETWEEN, use %precedence [-Werror=precedence]
input.y:177.1-5: error: useless associativity for AND, use %precedence [-Werror=precedence]
input.y:157.1-5: error: useless associativity for HERE, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_N, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_E, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_W, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_S, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_NE, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_SE, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_NW, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_SW, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_C, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for DOT_START, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for DOT_END, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for SIN, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for COS, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for ATAN2, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for LOG, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for EXP, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for SQRT, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for K_MAX, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for K_MIN, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for INT, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for RAND, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for SRAND, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for TOP, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for BOTTOM, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for UPPER, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for LOWER, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for LEFT_CORNER, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for RIGHT_CORNER, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for NORTH, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for SOUTH, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for EAST, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for WEST, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for CENTER, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for END, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for START, use %precedence [-Werror=precedence]
input.y:140.1-5: error: useless associativity for PLOT, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for THICKNESS, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for FILL, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless precedence and associativity for COLORED [-Werror=precedence]
input.y:153.1-5: error: useless precedence and associativity for OUTLINED [-Werror=precedence]
input.y:141.1-5: error: useless associativity for SPRINTF, use %precedence [-Werror=precedence]
input.y:137.1-5: error: useless associativity for '.', use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for '(', use %precedence [-Werror=precedence]
input.y:157.1-5: error: useless associativity for '`', use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for '[', use %precedence [-Werror=precedence]
input.y:170.1-5: error: useless associativity for ',', use %precedence [-Werror=precedence]
input.y:181.1-6: error: useless associativity for '!', use %precedence [-Werror=precedence]
input.y: error: fix-its can be applied. Rerun with option '--update'. [-Werror=other]
./existing.at:1460: sed 's,.*/$,,' stderr 1>&2
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=error
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=none -Werror --trace=none
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Wnone,none -Werror --trace=none
./existing.at:1460: sed -n 's/^State //p' input.output | tail -1
./existing.at:1460: sed 's/^%define lr.type .*$//' input.y > input-lalr.y
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --report=all,no-cex input-lalr.y
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=none -Werror --trace=none
stderr:
input-lalr.y:471.11-48: warning: rule useless in parser due to conflicts [-Wother]
stdout:
./existing.at:1460: diff -u input-lalr.output input.output | sed -n '/^@@/,$p' | sed 's/^ $//'
./existing.at:1460: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./existing.at:1460: sed -n 's/^State //p' input.output | tail -1
./existing.at:1460: sed 's/^%define lr.type .*$//' input.y > input-lalr.y
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --report=all,no-cex input-lalr.y
stderr:
stdout:
./existing.at:1460: $PREPARSER ./input
stderr:
syntax error, unexpected LEFT
./existing.at:1460: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
622. existing.at:1460: ok
624. existing.at:1460: testing GNU pic (Groff 1.18.1) Grammar: Canonical LR(1) ...
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y
stderr:
input-lalr.y:471.11-48: warning: rule useless in parser due to conflicts [-Wother]
stdout:
./existing.at:1460: diff -u input-lalr.output input.output | sed -n '/^@@/,$p' | sed 's/^ $//'
./existing.at:1460: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./existing.at:1460: $PREPARSER ./input
stderr:
./existing.at:1460: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
623. existing.at:1460: ok
625. regression.at:25: testing Trivial grammars ...
./regression.at:43: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./regression.at:44: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
stderr:
stdout:
./regression.at:45: $CC $CFLAGS $CPPFLAGS -c -o input.o -DYYDEBUG -c input.c
stderr:
stdout:
625. regression.at:25: ok
626. regression.at:55: testing YYSTYPE typedef ...
./regression.at:73: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./regression.at:74: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
stderr:
stdout:
626. regression.at:55: ok
627. regression.at:85: testing Early token definitions with --yacc ...
./regression.at:115: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --yacc -o input.c input.y
./regression.at:116: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
stderr:
stdout:
627. regression.at:85: ok
628. regression.at:127: testing Early token definitions without --yacc ...
./regression.at:161: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./regression.at:162: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
stderr:
stdout:
628. regression.at:127: ok
629. regression.at:173: testing Braces parsing ...
./regression.at:185: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -v -o input.c input.y
./regression.at:187: grep 'tests = {{{{{{{{{{}}}}}}}}}};' input.c
stdout:
{ tests = {{{{{{{{{{}}}}}}}}}}; }
629. regression.at:173: ok
630. regression.at:196: testing Rule Line Numbers ...
./regression.at:232: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c -v input.y
./regression.at:235: cat input.output
630. regression.at:196: ok
631. regression.at:345: testing Mixing %token styles ...
./regression.at:357: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -v -Wall -o input.c input.y
./regression.at:357: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -v -Wall -o input.c input.y -Werror
stderr:
input.y:3.1-5: error: useless precedence and associativity for "||" [-Werror=precedence]
input.y:3.1-5: error: useless precedence and associativity for "<=" [-Werror=precedence]
./regression.at:357: sed 's,.*/$,,' stderr 1>&2
./regression.at:357: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -v -Wall -o input.c input.y --warnings=error
./regression.at:357: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -v -Wall -o input.c input.y -Wnone,none -Werror --trace=none
./regression.at:357: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -v -Wall -o input.c input.y --warnings=none -Werror --trace=none
631. regression.at:345: ok
632. regression.at:437: testing Token definitions: parse.error=detailed ...
./regression.at:437: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -o input.c input.y
./regression.at:437: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y -Werror
stderr:
input.y:26.8-14: error: symbol SPECIAL redeclared [-Werror=other]
   26 | %token SPECIAL "\\\'\?\"\a\b\f\n\r\t\v\001\201\x001\x000081??!"
      |        ^~~~~~~
input.y:25.8-14: note: previous declaration
   25 | %token SPECIAL "\\\'\?\"\a\b\f\n\r\t\v\001\201\x001\x000081??!"
      |        ^~~~~~~
input.y:26.16-63: error: symbol "\\\'\?\"\a\b\f\n\r\t\v\001\201\x001\x000081??!" used more than once as a literal string [-Werror=other]
   26 | %token SPECIAL "\\\'\?\"\a\b\f\n\r\t\v\001\201\x001\x000081??!"
      |                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./regression.at:437: sed 's,.*/$,,' stderr 1>&2
./regression.at:437: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y --warnings=error
./regression.at:437: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y -Wnone,none -Werror --trace=none
./regression.at:437: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y --warnings=none -Werror --trace=none
./regression.at:437: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:437: $PREPARSER ./input
stderr:
syntax error, unexpected a, expecting ∃¬∩∪∀
./regression.at:437: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
632. regression.at:437: ok
633. regression.at:438: testing Token definitions: parse.error=verbose ...
./regression.at:438: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -o input.c input.y
./regression.at:438: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y -Werror
stderr:
input.y:26.8-14: error: symbol SPECIAL redeclared [-Werror=other]
   26 | %token SPECIAL "\\\'\?\"\a\b\f\n\r\t\v\001\201\x001\x000081??!"
      |        ^~~~~~~
input.y:25.8-14: note: previous declaration
   25 | %token SPECIAL "\\\'\?\"\a\b\f\n\r\t\v\001\201\x001\x000081??!"
      |        ^~~~~~~
input.y:26.16-63: error: symbol "\\\'\?\"\a\b\f\n\r\t\v\001\201\x001\x000081??!" used more than once as a literal string [-Werror=other]
   26 | %token SPECIAL "\\\'\?\"\a\b\f\n\r\t\v\001\201\x001\x000081??!"
      |                ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
./regression.at:438: sed 's,.*/$,,' stderr 1>&2
./regression.at:438: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y --warnings=error
./regression.at:438: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y -Wnone,none -Werror --trace=none
./regression.at:438: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret -o input.c input.y --warnings=none -Werror --trace=none
./regression.at:438: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:438: $PREPARSER ./input
stderr:
syntax error, unexpected a, expecting "\\\'\?\"\a\b\f\n\r\t\v\001\201\x001\x000081??!"
./regression.at:438: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
633. regression.at:438: ok
634. regression.at:447: testing Characters Escapes ...
./regression.at:465: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./regression.at:466: $CC $CFLAGS $CPPFLAGS -c -o input.o input.c
stderr:
stdout:
634. regression.at:447: ok
635. regression.at:480: testing Web2c Report ...
./regression.at:505: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -v input.y
./regression.at:506: cat input.output
635. regression.at:480: ok
./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Werror
636. regression.at:661: testing Web2c Actions ...
./regression.at:674: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -v -o input.c input.y
./regression.at:679: cat tables.c
636. regression.at:661: ok
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Werror
637. regression.at:812: testing Useless Tokens ...
./regression.at:912: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-other -o input.c input.y
./regression.at:917: cat tables.c
637. regression.at:812: ok
638. regression.at:1143: testing Dancer ...
./regression.at:1143: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o dancer.c dancer.y
./regression.at:1143: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o dancer dancer.c $LIBS
stderr:
stdout:
./regression.at:1143: $PREPARSER ./dancer
stderr:
syntax error, unexpected ':'
./regression.at:1143: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
638. regression.at:1143: ok
639. regression.at:1144: testing Dancer %glr-parser ...
./regression.at:1144: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o dancer.c dancer.y
./regression.at:1144: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o dancer dancer.c $LIBS
stderr:
stdout:
./regression.at:1144: $PREPARSER ./dancer
stderr:
syntax error, unexpected ':'
./regression.at:1144: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
639. regression.at:1144: ok
640. regression.at:1145: testing Dancer lalr1.cc ...
./regression.at:1145: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o dancer.cc dancer.y
./regression.at:1145: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o dancer dancer.cc $LIBS
stderr:
stdout:
./regression.at:1145: $PREPARSER ./dancer
stderr:
syntax error, unexpected ':'
./regression.at:1145: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
640. regression.at:1145: ok
641. regression.at:1220: testing Expecting two tokens ...
./regression.at:1220: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o expect2.c expect2.y
./regression.at:1220: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o expect2 expect2.c $LIBS
stderr:
stdout:
./regression.at:1220: $PREPARSER ./expect2
stderr:
syntax error, unexpected '+', expecting A or B
./regression.at:1220: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
641. regression.at:1220: ok
642. regression.at:1221: testing Expecting two tokens %glr-parser ...
./regression.at:1221: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o expect2.c expect2.y
./regression.at:1221: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o expect2 expect2.c $LIBS
stderr:
input.y:202.20: error: empty rule without %empty [-Werror=empty-rule]
input.y:270.7: error: empty rule without %empty [-Werror=empty-rule]
input.y:292.13: error: empty rule without %empty [-Werror=empty-rule]
input.y:309.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:382.14: error: empty rule without %empty [-Werror=empty-rule]
input.y:471.11-48: error: rule useless in parser due to conflicts [-Werror=other]
input.y:154.1-5: error: useless associativity for LABEL, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for VARIABLE, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for NUMBER, use %precedence [-Werror=precedence]
input.y:141.1-5: error: useless associativity for TEXT, use %precedence [-Werror=precedence]
input.y:157.1-5: error: useless associativity for ORDINAL, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for LAST, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for UP, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for DOWN, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for BOX, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for CIRCLE, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for ELLIPSE, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for ARC, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for LINE, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for ARROW, use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for SPLINE, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for HEIGHT, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for RADIUS, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for WIDTH, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for DIAMETER, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for FROM, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for TO, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for AT, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless precedence and associativity for SOLID [-Werror=precedence]
input.y:153.1-5: error: useless associativity for DOTTED, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for DASHED, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for CHOP, use %precedence [-Werror=precedence]
input.y:147.1-5: error: useless precedence and associativity for LJUST [-Werror=precedence]
input.y:147.1-5: error: useless precedence and associativity for RJUST [-Werror=precedence]
input.y:147.1-5: error: useless precedence and associativity for ABOVE [-Werror=precedence]
input.y:147.1-5: error: useless precedence and associativity for BELOW [-Werror=precedence]
input.y:176.1-5: error: useless associativity for OF, use %precedence [-Werror=precedence]
input.y:176.1-5: error: useless associativity for BETWEEN, use %precedence [-Werror=precedence]
input.y:177.1-5: error: useless associativity for AND, use %precedence [-Werror=precedence]
input.y:157.1-5: error: useless associativity for HERE, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_N, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_E, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_W, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_S, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_NE, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_SE, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_NW, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_SW, use %precedence [-Werror=precedence]
input.y:166.1-5: error: useless associativity for DOT_C, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for DOT_START, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for DOT_END, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for SIN, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for COS, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for ATAN2, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for LOG, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for EXP, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for SQRT, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for K_MAX, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for K_MIN, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for INT, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for RAND, use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for SRAND, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for TOP, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for BOTTOM, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for UPPER, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for LOWER, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for LEFT_CORNER, use %precedence [-Werror=precedence]
input.y:167.1-5: error: useless associativity for RIGHT_CORNER, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for NORTH, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for SOUTH, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for EAST, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for WEST, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for CENTER, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for END, use %precedence [-Werror=precedence]
input.y:168.1-5: error: useless associativity for START, use %precedence [-Werror=precedence]
input.y:140.1-5: error: useless associativity for PLOT, use %precedence [-Werror=precedence]
input.y:162.1-5: error: useless associativity for THICKNESS, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless associativity for FILL, use %precedence [-Werror=precedence]
input.y:153.1-5: error: useless precedence and associativity for COLORED [-Werror=precedence]
input.y:153.1-5: error: useless precedence and associativity for OUTLINED [-Werror=precedence]
input.y:141.1-5: error: useless associativity for SPRINTF, use %precedence [-Werror=precedence]
input.y:137.1-5: error: useless associativity for '.', use %precedence [-Werror=precedence]
input.y:156.1-5: error: useless associativity for '(', use %precedence [-Werror=precedence]
input.y:157.1-5: error: useless associativity for '`', use %precedence [-Werror=precedence]
input.y:159.1-5: error: useless associativity for '[', use %precedence [-Werror=precedence]
input.y:170.1-5: error: useless associativity for ',', use %precedence [-Werror=precedence]
input.y:181.1-6: error: useless associativity for '!', use %precedence [-Werror=precedence]
input.y: error: fix-its can be applied. Rerun with option '--update'. [-Werror=other]
./existing.at:1460: sed 's,.*/$,,' stderr 1>&2
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=error
stderr:
stdout:
./regression.at:1221: $PREPARSER ./expect2
stderr:
syntax error, unexpected '+', expecting A or B
./regression.at:1221: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
642. regression.at:1221: ok
643. regression.at:1222: testing Expecting two tokens lalr1.cc ...
./regression.at:1222: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o expect2.cc expect2.y
./regression.at:1222: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o expect2 expect2.cc $LIBS
stderr:
stdout:
./regression.at:1222: $PREPARSER ./expect2
stderr:
syntax error, unexpected '+', expecting A or B
./regression.at:1222: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
643. regression.at:1222: ok
644. regression.at:1230: testing Braced code in declaration in rules section ...
./regression.at:1261: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./regression.at:1262: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:1263: $PREPARSER ./input --debug
stderr:
Starting parse
Entering state 0
Stack now 0
Reducing stack by rule 1 (line 20):
-> $$ = nterm start ()
Entering state 1
Stack now 0 1
Reading a token
Next token is token 'a' (PRINTER)
syntax error, unexpected 'a', expecting end of file
Error: popping nterm start ()
Stack now 0
Cleanup: discarding lookahead token 'a' (PRINTER)
DESTRUCTOR
Stack now 0
./regression.at:1263: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
644. regression.at:1230: ok
645. regression.at:1291: testing String alias declared after use ...
./regression.at:1304: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
645. regression.at:1291: ok
646. regression.at:1314: testing Extra lookahead sets in report ...
./regression.at:1329: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret --report=all input.y
./regression.at:1330: sed -n '/^State 1$/,/^State 2$/p' input.output
646. regression.at:1314: ok
647. regression.at:1355: testing Token number in precedence declaration ...
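Test 646 above checks one state's section of bison's `--report=all` automaton report by cutting it out with a sed address range: `sed -n '/^State 1$/,/^State 2$/p'` prints everything from the line `State 1` through the next line `State 2`, inclusive. The same idiom can be tried on canned input (the sample lines below are illustrative, not from the real `input.output`):

```shell
# Print the block between the 'State 1' and 'State 2' marker lines,
# as the test does on bison's *.output report.
printf 'State 0\n\nState 1\nrule items here\nState 2\n' \
  | sed -n '/^State 1$/,/^State 2$/p'
# prints:
#   State 1
#   rule items here
#   State 2
```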
./regression.at:1388: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wall -o input.c input.y
./regression.at:1388: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall -o input.c input.y -Werror
stderr:
input.y:24.5-19: error: rule useless in parser due to conflicts [-Werror=other]
input.y:28.5-19: error: rule useless in parser due to conflicts [-Werror=other]
input.y:18.1-5: error: useless precedence and associativity for TK1 [-Werror=precedence]
./regression.at:1388: sed 's,.*/$,,' stderr 1>&2
./regression.at:1388: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall -o input.c input.y --warnings=error
./regression.at:1388: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall -o input.c input.y -Wnone,none -Werror --trace=none
./regression.at:1388: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall -o input.c input.y --warnings=none -Werror --trace=none
./regression.at:1393: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:1394: $PREPARSER ./input
stderr:
./regression.at:1394: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
647. regression.at:1355: ok
648. regression.at:1408: testing parse-gram.y: LALR = IELR ...
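A pattern worth noting in test 647 (and in the `existing.at` tests): the harness reruns the same grammar under several warning configurations, `-Werror`, `--warnings=error`, `-Wnone,none -Werror --trace=none`, and `--warnings=none -Werror --trace=none`, to verify the flag spellings behave consistently. A hedged sketch of that loop (again with `echo` standing in for bison):

```shell
# The four warning configurations the suite exercises for one grammar:
for flags in \
    '-Werror' \
    '--warnings=error' \
    '-Wnone,none -Werror --trace=none' \
    '--warnings=none -Werror --trace=none'
do
  # shellcheck disable=SC2086  # word splitting of $flags is intended
  echo bison --color=no -fno-caret -Wall -o input.c input.y $flags
done
```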
./regression.at:1414: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c -Dlr.type=lalr input.y
./regression.at:1417: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c -Dlr.type=ielr input.y
./regression.at:1420: diff lalr.c ielr.c
648. regression.at:1408: ok
649. regression.at:1430: testing parse.error=verbose and YYSTACK_USE_ALLOCA ...
./regression.at:1481: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./regression.at:1482: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Wnone,none -Werror --trace=none
stderr:
input.y:128.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:137.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:142.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:161.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:179.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:205.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:213.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:225.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:292.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:294.20: error: empty rule without %empty [-Werror=empty-rule]
input.y:367.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:373.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:387.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:401.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:413.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:443.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:471.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:474.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:489.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:506.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:587.18: error: empty rule without %empty [-Werror=empty-rule]
input.y:591.18: error: empty rule without %empty [-Werror=empty-rule]
input.y: error: 1876 shift/reduce conflicts [-Werror=conflicts-sr]
input.y: error: 144 reduce/reduce conflicts [-Werror=conflicts-rr]
input.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
input.y:72.1-5: error: useless associativity for HQUA, use %precedence [-Werror=precedence]
input.y:53.1-6: error: useless associativity for HASSIGN, use %precedence [-Werror=precedence]
input.y:54.1-5: error: useless associativity for HORELSE, use %precedence [-Werror=precedence]
input.y:55.1-5: error: useless associativity for HANDTHEN, use %precedence [-Werror=precedence]
input.y:61.1-5: error: useless associativity for HNOT, use %precedence [-Werror=precedence]
input.y:68.1-5: error: useless associativity for UNEAR, use %precedence [-Werror=precedence]
input.y: error: fix-its can be applied. Rerun with option '--update'. [-Werror=other]
./existing.at:808: sed 's,.*/$,,' stderr 1>&2
./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=error
stderr:
stdout:
./regression.at:1483: $PREPARSER ./input
stderr:
syntax error, unexpected 'a', expecting 123456789112345678921234567893123456789412345678951234567896123A or 123456789112345678921234567893123456789412345678951234567896123B
syntax error, unexpected end of file, expecting 123456789112345678921234567893123456789412345678951234567896123A or 123456789112345678921234567893123456789412345678951234567896123B
./regression.at:1483: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
649. regression.at:1430: ok
650. regression.at:1504: testing parse.error=verbose overflow ...
./regression.at:1604: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./regression.at:1611: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:1613: $PREPARSER ./input
stderr:
syntax error, unexpected 'a', expecting 123456789112345678921234567893123456789412345678951234567896123A or 123456789112345678921234567893123456789412345678951234567896123B or 123456789112345678921234567893123456789412345678951234567896123C
syntax error, unexpected 'd'
syntax error
memory exhausted
./regression.at:1613: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
650. regression.at:1504: ok
651. regression.at:1628: testing LAC: Exploratory stack ...
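The diagnostics above are ordinary `-Wempty-rule` and `-Wprecedence` warnings promoted to errors by `-Werror`; the grammars being compiled (external "existing" grammars such as pic's) predate the corresponding directives. Fixing them means marking empty right-hand sides with `%empty` and declaring tokens whose associativity is never used with `%precedence` instead of `%left`/`%right`. A hypothetical minimal grammar (`demo.y` is an illustrative name, not a file from the suite) showing both fixes, written from a shell heredoc to stay in the log's register:

```shell
# Hypothetical demo.y showing the two fixes these diagnostics ask for.
cat > demo.y <<'EOF'
%precedence UNARY   /* was: %left UNARY -- its associativity was never used */
%%
list:
  %empty            /* was: an empty right-hand side with no marker */
| list 'x'
;
EOF
grep -c '%' demo.y   # count of lines using % directives/markers -> 3
```

With these changes a grammar compiles cleanly under `-Werror=empty-rule -Werror=precedence`; bison's `--update` option, mentioned in the "fix-its can be applied" note, can rewrite the file automatically.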
./regression.at:1713: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dparse.lac=full \
  -Dparse.lac.es-capacity-initial=1 \
  -Dparse.lac.memory-trace=full -o input.c input.y
./regression.at:1713: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:1713: $PREPARSER ./input --debug > stdout.txt 2> stderr.txt
stderr:
./regression.at:1713: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./regression.at:1713: grep 'syntax error,' stderr.txt
./regression.at:1713: "$PERL" -0777 -ne 'print s/inconsistent default reduction//g;' stdout.txt
./regression.at:1713: "$PERL" -0777 -ne 'print s/\bconsistent default reduction//g;' stdout.txt
./regression.at:1713: "$PERL" -0777 -ne 'print s/\(realloc//g;' < stderr.txt
./regression.at:1714: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dparse.lac=full \
  -Dparse.lac.es-capacity-initial=1 \
  -Dparse.lac.memory-trace=full -o input.c input.y
./regression.at:1714: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:1714: $PREPARSER ./input --debug > stdout.txt 2> stderr.txt
stderr:
./regression.at:1714: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./regression.at:1714: grep 'syntax error,' stderr.txt
./regression.at:1714: "$PERL" -0777 -ne 'print s/inconsistent default reduction//g;' stdout.txt
./regression.at:1714: "$PERL" -0777 -ne 'print s/\bconsistent default reduction//g;' stdout.txt
./regression.at:1714: "$PERL" -0777 -ne 'print s/\(realloc//g;' < stderr.txt
./regression.at:1715: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dparse.lac=full \
  -Dparse.lac.es-capacity-initial=1 \
  -Dparse.lac.memory-trace=full -o input.c input.y
./regression.at:1715: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:1715: $PREPARSER ./input --debug > stdout.txt 2> stderr.txt
stderr:
./regression.at:1715: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./regression.at:1715: grep 'syntax error,' stderr.txt
./regression.at:1715: "$PERL" -0777 -ne 'print s/inconsistent default reduction//g;' stdout.txt
./regression.at:1715: "$PERL" -0777 -ne 'print s/\bconsistent default reduction//g;' stdout.txt
./regression.at:1715: "$PERL" -0777 -ne 'print s/\(realloc//g;' < stderr.txt
./regression.at:1716: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dparse.lac=full \
  -Dparse.lac.es-capacity-initial=1 \
  -Dparse.lac.memory-trace=full -o input.c input.y
./regression.at:1716: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:1716: $PREPARSER ./input --debug > stdout.txt 2> stderr.txt
stderr:
./regression.at:1716: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./regression.at:1716: grep 'syntax error,' stderr.txt
./regression.at:1716: "$PERL" -0777 -ne 'print s/inconsistent default reduction//g;' stdout.txt
./regression.at:1716: "$PERL" -0777 -ne 'print s/\bconsistent default reduction//g;' stdout.txt
./regression.at:1716: "$PERL" -0777 -ne 'print s/\(realloc//g;' < stderr.txt
./regression.at:1719: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dparse.lac=full -o input.cc input.y
./regression.at:1719: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
./existing.at:1460: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=none -Werror --trace=none
stderr:
stdout:
./regression.at:1719: $PREPARSER ./input --debug > stdout.txt 2> stderr.txt
stderr:
./regression.at:1719: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./regression.at:1719: grep 'syntax error,' stderr.txt
./regression.at:1719: "$PERL" -0777 -ne 'print s/inconsistent default reduction//g;' stdout.txt
./regression.at:1719: "$PERL" -0777 -ne 'print s/\bconsistent default reduction//g;' stdout.txt
./regression.at:1727: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dparse.lac=full -o input.java input.y
651. regression.at:1628: skipped (regression.at:1727)
652. regression.at:1739: testing LAC: Memory exhaustion ...
./regression.at:1771: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dparse.lac=full -Dparse.lac.es-capacity-initial=1 -o input.c input.y
./regression.at:1771: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:1771: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:1772: $PREPARSER ./input --debug
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Now at end of input.
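Nearly every run in this log is followed by `sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr`: when the build is instrumented for gcov, stale `.gcda` files produce `profiling:...:Merge mismatch for summaries` lines on stderr, and the harness strips them so a test is not failed on unexpected stderr output. The filter itself is easy to demonstrate on canned input (the `input.gcda` path below is illustrative):

```shell
# Drop gcov 'Merge mismatch' noise, keep real parser diagnostics.
printf 'syntax error\nprofiling:input.gcda:Merge mismatch for summaries\n' \
  | sed -e '/^profiling:.*:Merge mismatch for summaries/d'
# prints only: syntax error
```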
LAC: initial context established for "end of file"
LAC: checking lookahead "end of file": R2 G3 R2 G5 R2 G6 R2 G7 R2 G8 R2 G9 R2 G10 R2 G11 R2 (max size exceeded)
memory exhausted
Cleanup: discarding lookahead token "end of file" ()
Stack now 0
./regression.at:1772: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./regression.at:1787: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dparse.lac=full -Dparse.lac.es-capacity-initial=1 -o input.c input.y
./regression.at:1787: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:1787: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:1788: $PREPARSER ./input --debug
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token "invalid token" ()
LAC: initial context established for "invalid token"
LAC: checking lookahead "invalid token": Always Err
Constructing syntax error message
LAC: checking lookahead "end of file": R2 G3 R2 G5 R2 G6 R2 G7 R2 G8 R2 G9 R2 G10 R2 G11 R2 (max size exceeded)
syntax error
memory exhausted
Cleanup: discarding lookahead token "invalid token" ()
Stack now 0
./regression.at:1788: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
652. regression.at:1739: ok
653. regression.at:1874: testing Lex and parse params: yacc.c ...
./regression.at:1874: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./regression.at:1874: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:1874: $PREPARSER ./input
stderr:
./regression.at:1874: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
653. regression.at:1874: ok
654. regression.at:1875: testing Lex and parse params: glr.c ...
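The `LAC:` trace lines above come from parsers generated with lookahead correction enabled. Tests 651 and 652 pass the relevant `%define` values on the command line; starting the exploratory-stack capacity at 1 forces repeated reallocation of that stack until, in the memory-exhaustion test, its configured maximum is exceeded. A sketch of the defines involved (`echo` stands in for bison so the snippet is self-contained):

```shell
# %define values driving the LAC tests, passed via -D:
echo bison -Dparse.lac=full \
     -Dparse.lac.es-capacity-initial=1 \
     -Dparse.lac.memory-trace=full \
     -o input.c input.y
```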
./regression.at:1875: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./regression.at:1875: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -Wall --report=all,no-cex --header -o input.c input.y -Wnone,none -Werror --trace=none
./existing.at:1460: sed -n 's/^State //p' input.output | tail -1
./existing.at:1460: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./regression.at:1875: $PREPARSER ./input
stderr:
./regression.at:1875: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
654. regression.at:1875: ok
655. regression.at:1876: testing Lex and parse params: lalr1.cc ...
./regression.at:1876: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./regression.at:1876: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./existing.at:1460: $PREPARSER ./input
stderr:
./existing.at:1460: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
624. existing.at:1460: ok
656. regression.at:1877: testing Lex and parse params: glr.cc ...
./regression.at:1877: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./regression.at:1877: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./regression.at:1876: $PREPARSER ./input
stderr:
./regression.at:1876: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
655. regression.at:1876: ok
657. regression.at:1878: testing Lex and parse params: glr2.cc ...
./regression.at:1878: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y ./regression.at:1878: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./regression.at:1877: $PREPARSER ./input stderr: ./regression.at:1877: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 656. regression.at:1877: ok 658. regression.at:1889: testing stdio.h is not needed ... ./regression.at:1906: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./regression.at:1906: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: 658. regression.at:1889: ok 659. push.at:25: testing Memory Leak for Early Deletion ... ./push.at:74: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./push.at:75: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./push.at:76: $PREPARSER ./input stderr: ./push.at:76: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 659. push.at:25: ok 660. push.at:84: testing Multiple impure instances ... ./push.at:134: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./push.at:134: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./push.at:134: $PREPARSER ./input stderr: ./push.at:134: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./push.at:135: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./push.at:135: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./push.at:135: $PREPARSER ./input stderr: ./push.at:135: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 660. push.at:84: ok 661. 
push.at:145: testing Unsupported Skeletons ... ./push.at:156: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret input.y 661. push.at:145: ok 662. push.at:167: testing Pstate reuse ... ./push.at:276: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y ./push.at:276: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS stderr: stdout: ./push.at:277: ./input 662. push.at:167: ok 663. c++.at:26: testing C++ Locations Unit Tests ... ======== Testing with C++ standard flags: '' ./c++.at:92: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y ./c++.at:92: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from input.hh:51, from input.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1769:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:1605:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2037:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1769:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1591:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&, const yy::parser::location_type&)' at input.cc:2468:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1769:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1591:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2451:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:2439:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./regression.at:1878: $PREPARSER ./input
stderr:
./regression.at:1878: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
657. regression.at:1878: ok
664. c++.at:107: testing C++ Variant-based Symbols Unit Tests ...
./c++.at:234: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.yy
======== Testing with C++ standard flags: ''
./c++.at:235: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:92: $PREPARSER ./input
stderr:
./c++.at:92: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:92: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:92: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.yy:33:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1037:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1037:24,
    inlined from 'int main()' at list.yy:132:16:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:235: $PREPARSER ./list
stderr:
./c++.at:235: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:235: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
./existing.at:808: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no
-fno-caret -Wall --report=all,no-cex --header -o input.c input.y --warnings=none -Werror --trace=none
stderr:
stdout:
./c++.at:92: $PREPARSER ./input
stderr:
./c++.at:92: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:92: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:92: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:92: $PREPARSER ./input
stderr:
./c++.at:92: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:92: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:92: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.yy:33:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1037:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:235: $PREPARSER ./list
stderr:
./c++.at:235: sed >&2 -e '/^profiling:.*:Merge mismatch for
summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:235: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:92: $PREPARSER ./input
stderr:
./c++.at:92: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:92: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:92: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.yy:33:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1037:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:235: $PREPARSER ./list
stderr:
./c++.at:235: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:235: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:92: $PREPARSER ./input
stderr:
./c++.at:92: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:92: COLUMNS=1000; export COLUMNS;
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:92: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.yy:33:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1037:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1037:24,
    inlined from 'int main()' at list.yy:132:16:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:235: $PREPARSER ./list
stderr:
./c++.at:235: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:235: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:92: $PREPARSER ./input
stderr:
./c++.at:92: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:92: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:92: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
./existing.at:808: sed -n 's/^State //p' input.output | tail -1
./existing.at:808: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.yy:33:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1037:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1037:24,
    inlined from 'int main()' at list.yy:132:16:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:235: $PREPARSER ./list
stderr:
./c++.at:235: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:235: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:92: $PREPARSER ./input
stderr:
./c++.at:92: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:92: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:92: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./existing.at:808: $PREPARSER ./input
stderr:
./existing.at:808: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
621. existing.at:808: ok
665. c++.at:247: testing Multiple occurrences of $n and api.value.automove ...
./c++.at:263: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret input.yy
./c++.at:263: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.yy -Werror
stderr:
input.yy:16.33-34: error: multiple occurrences of $2 with api.value.automove [-Werror=other]
   16 |   | "twice" exp { $$ = $2 + $2; }
      |                                 ^~
input.yy:17.33-36: error: multiple occurrences of $2 with api.value.automove [-Werror=other]
   17 |   | "thrice" exp[val] { $$ = $2 + $val + $2; }
      |                                 ^~~~
input.yy:17.40-41: error: multiple occurrences of $2 with api.value.automove [-Werror=other]
   17 |   | "thrice" exp[val] { $$ = $2 + $val + $2; }
      |                                        ^~
./c++.at:263: sed 's,.*/$,,' stderr 1>&2
./c++.at:263: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.yy --warnings=error
./c++.at:263: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.yy -Wnone,none -Werror --trace=none
./c++.at:263: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret input.yy --warnings=none -Werror --trace=none
665. c++.at:247: ok
666. c++.at:566: testing Variants lalr1.cc ...
======== Testing with C++ standard flags: ''
./c++.at:566: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.yy:33:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1037:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1037:24,
    inlined from 'int main()' at list.yy:132:16:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:235: $PREPARSER ./list
stderr:
./c++.at:235: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:235: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:92: $PREPARSER ./input
stderr:
./c++.at:92: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
663. c++.at:26: ok
667. c++.at:567: testing Variants lalr1.cc parse.assert ...
======== Testing with C++ standard flags: ''
./c++.at:567: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1156:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:566: $here/modern
stdout:
Modern C++: 201703
./c++.at:566: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:566: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:566: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:235: $PREPARSER ./list
stderr:
./c++.at:235: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:235: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |
vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1156:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:567: $here/modern
stdout:
Modern C++: 201703
./c++.at:567: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:567: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:567: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:566: $here/modern
stdout:
Legac++
./c++.at:566: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:566: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:566:
COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:235: $PREPARSER ./list
stderr:
./c++.at:235: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
664. c++.at:107: ok
668. c++.at:568: testing Variants lalr1.cc parse.assert api.value.automove ...
======== Testing with C++ standard flags: ''
./c++.at:568: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1156:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp =
yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:566: $here/modern
stdout:
Legac++
./c++.at:566: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:566: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:566: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
stderr:
stdout:
./c++.at:567: $here/modern
stdout:
Legac++
./c++.at:567: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:567: sed >&2 -e
'/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:567: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:568: $here/modern
stdout:
Modern C++: 201703
./c++.at:568: $PREPARSER ./list
stderr:
Destroy: ""
Destroy: ""
Destroy: 1
Destroy: ""
Destroy: ()
Destroy: ""
Destroy: ""
Destroy: ()
Destroy: ""
Destroy: 3
Destroy:
()
Destroy: ""
Destroy: ""
Destroy: ()
Destroy: ()
Destroy: 5
Destroy: ()
Destroy: ""
Destroy: ""
Destroy: ()
Destroy: (0, 1, 2, 4, 6)
./c++.at:568: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:568: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1156:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 444 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In file included from /usr/include/c++/12/vector:64: In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ stdout: ./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:567: $here/modern stdout: Legac++ ./c++.at:567: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 
Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:567: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:567: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:566: $here/modern stdout: Modern C++: 201103 ./c++.at:566: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:566: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:566: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 444 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In file included from /usr/include/c++/12/vector:64: In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = 
yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ stdout: ./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:568: $here/modern stdout: Legac++ ./c++.at:568: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:568: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:568: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1156:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:566: $here/modern stdout: Modern C++: 201402 ./c++.at:566: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:566: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:566: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:567: $here/modern stdout: Modern C++: 201103 ./c++.at:567: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:567: sed >&2 -e 
'/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:567: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 444 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In file included from /usr/include/c++/12/vector:64: In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ stdout: ./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:568: $here/modern stdout: Legac++ ./c++.at:568: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:568: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:568: COLUMNS=1000; export COLUMNS; 
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1156:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:566: $here/modern stdout: Modern C++: 201703 ./c++.at:566: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:566: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:566: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:567: $here/modern stdout: Modern C++: 201402 ./c++.at:567: 
$PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:567: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:567: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:568: $here/modern stdout: Modern C++: 201103 ./c++.at:568: $PREPARSER ./list stderr: Destroy: "" Destroy: "" Destroy: 1 Destroy: "" Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: "" Destroy: 3 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: () Destroy: 5 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: (0, 1, 2, 4, 6) ./c++.at:568: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:568: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:567: $here/modern stdout: Modern C++: 201703 ./c++.at:567: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:567: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:567: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:567: $CXX $CPPFLAGS 
$CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:566: $here/modern stdout: Modern C++: 202002 ./c++.at:566: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:566: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:566: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:568: $here/modern stdout: Modern C++: 201402 ./c++.at:568: $PREPARSER ./list stderr: Destroy: "" Destroy: "" Destroy: 1 Destroy: "" Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: "" Destroy: 3 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: () Destroy: 5 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: (0, 1, 2, 4, 6) ./c++.at:568: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:568: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:566: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:567: $here/modern stdout: Modern C++: 202002 ./c++.at:567: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 
Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6)
./c++.at:567: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:567: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:566: $here/modern
stdout:
Modern C++: 202100
./c++.at:566: $PREPARSER ./list
stderr:
Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6)
./c++.at:566: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
666. c++.at:566: ok
669. c++.at:569: testing Variants lalr1.cc parse.assert %locations ...
======== Testing with C++ standard flags: ''
./c++.at:569: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70, from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:568: $here/modern stdout: Modern C++: 201703 ./c++.at:568: $PREPARSER ./list stderr: Destroy: "" Destroy: "" Destroy: 1 Destroy: "" Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: "" Destroy: 3 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: () Destroy: 5 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: (0, 1, 2, 4, 6) ./c++.at:568: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:568: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:567: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:567: $here/modern
stdout:
Modern C++: 202100
./c++.at:567: $PREPARSER ./list
stderr:
Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6)
./c++.at:567: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
667. c++.at:567: ok
670. c++.at:570: testing Variants lalr1.cc parse.assert %code {\n#define TWO_STAGE_BUILD\n} ...
======== Testing with C++ standard flags: '' ./c++.at:570: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:569: $here/modern stdout: Modern C++: 201703 ./c++.at:569: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:569: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:569: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:568: $here/modern stdout: Modern C++: 202002 ./c++.at:568: $PREPARSER ./list stderr: Destroy: "" Destroy: "" Destroy: 1 Destroy: "" Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: "" Destroy: 3 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: () Destroy: 5 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: (0, 1, 2, 4, 6) ./c++.at:568: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:568: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:19: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, 
_Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:569: $here/modern
stdout:
Legac++
./c++.at:569: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:569: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:569: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:570: $here/modern
stdout:
Modern C++: 201703
./c++.at:570: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:570: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:570: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:568: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:568: $here/modern
stdout:
Modern C++: 202100
./c++.at:568: $PREPARSER ./list
stderr:
Destroy: ""
Destroy: ""
Destroy: 1
Destroy: ""
Destroy: ()
Destroy: ""
Destroy: ""
Destroy: ()
Destroy: ""
Destroy: 3
Destroy: ()
Destroy: ""
Destroy: ""
Destroy: ()
Destroy: ()
Destroy: 5
Destroy: ()
Destroy: ""
Destroy: ""
Destroy: ()
Destroy: (0, 1, 2, 4, 6)
./c++.at:568: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
668. c++.at:568: ok
671. c++.at:571: testing Variants lalr1.cc parse.assert api.token.constructor ...
======== Testing with C++ standard flags: ''
./c++.at:571: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:19:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:570: $here/modern
stdout:
Legac++
./c++.at:570: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:570: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:570: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
stderr:
stdout:
./c++.at:569: $here/modern
stdout:
Legac++
./c++.at:569: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:569: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:569: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:571: $here/modern
stdout:
Modern C++: 201703
./c++.at:571: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:571: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:571: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:19:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:570: $here/modern
stdout:
Legac++
./c++.at:570: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:570: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:570: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:569: $here/modern
stdout:
Modern C++: 201103
./c++.at:569: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:569: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:569: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at list.cc:1812:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:571: $here/modern
stdout:
Legac++
./c++.at:571: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:571: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:571: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:19:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:570: $here/modern
stdout:
Modern C++: 201103
./c++.at:570: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:570: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:570: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:569: $here/modern
stdout:
Modern C++: 201402
./c++.at:569: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:569: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:569: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at list.cc:1812:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:571: $here/modern
stdout:
Legac++
./c++.at:571: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:571: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:571: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:19:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:570: $here/modern
stdout:
Modern C++: 201402
./c++.at:570: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:570: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:570: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:569: $here/modern
stdout:
Modern C++: 201703
./c++.at:569: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:569: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:569: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:571: $here/modern
stdout:
Modern C++: 201103
./c++.at:571: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:571: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:571: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:19:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:570: $here/modern
stdout:
Modern C++: 201703
./c++.at:570: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:570: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:570: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:569: $here/modern
stdout:
Modern C++: 202002
./c++.at:569: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:569: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:569: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
stderr:
stdout:
./c++.at:571: $here/modern
stdout:
Modern C++: 201402
./c++.at:571: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:571: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:571: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:570: $here/modern
stdout:
Modern C++: 202002
./c++.at:570: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:570: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:570: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:569: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:571: $here/modern
stdout:
Modern C++: 201703
./c++.at:571: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:571: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:571: COLUMNS=1000; export COLUMNS;
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:569: $here/modern stdout: Modern C++: 202100 ./c++.at:569: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:569: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 669. c++.at:569: ok 672. c++.at:572: testing Variants lalr1.cc parse.assert api.token.constructor api.token.prefix={TOK_} ... ======== Testing with C++ standard flags: '' ./c++.at:572: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:570: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:570: $here/modern stdout: Modern C++: 202100 ./c++.at:570: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:570: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 670. c++.at:570: ok 673. c++.at:573: testing Variants lalr1.cc parse.assert api.token.constructor api.token.prefix={TOK_} %locations ... 
======== Testing with C++ standard flags: ''
./c++.at:573: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
stderr:
stdout:
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:571: $here/modern
stdout:
Modern C++: 202002
./c++.at:571: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:571: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:571: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:572: $here/modern
stdout:
Modern C++: 201703
./c++.at:572: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:572: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:572: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:573: $here/modern
stdout:
Modern C++: 201703
./c++.at:573: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:573: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:573: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |         _M_realloc_insert(end(), __x);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at list.cc:1812:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |         _M_realloc_insert(end(), __x);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:572: $here/modern
stdout:
Legac++
./c++.at:572: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:572: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:572: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
stderr:
stdout:
./c++.at:571: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:571: $here/modern
stdout:
Modern C++: 202100
./c++.at:571: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:571: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
671. c++.at:571: ok
674. c++.at:574: testing Variants lalr1.cc parse.assert api.token.constructor api.token.prefix={TOK_} %locations api.value.automove ...
======== Testing with C++ standard flags: ''
./c++.at:574: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |         _M_realloc_insert(end(), __x);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at list.cc:2097:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |         _M_realloc_insert(end(), __x);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:573: $here/modern
stdout:
Legac++
./c++.at:573: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:573: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:573: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |         _M_realloc_insert(end(), __x);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at list.cc:1812:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |         _M_realloc_insert(end(), __x);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:572: $here/modern
stdout:
Legac++
./c++.at:572: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:572: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:572: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:574: $here/modern
stdout:
Modern C++: 201703
./c++.at:574: $PREPARSER ./list
stderr:
Destroy: ""
Destroy: ""
Destroy: 1
Destroy: ""
Destroy: ()
Destroy: ""
Destroy: ""
Destroy: ()
Destroy: ""
Destroy: 3
Destroy: ()
Destroy: ""
Destroy: ""
Destroy: ()
Destroy: ()
Destroy: 5
Destroy: ()
Destroy: ""
Destroy: ""
Destroy: ()
Destroy: (0, 1, 2, 4, 6)
./c++.at:574: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:574: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |         _M_realloc_insert(end(), __x);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at list.cc:2097:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |         _M_realloc_insert(end(), __x);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:573: $here/modern
stdout:
Legac++
./c++.at:573: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:573: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:573: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:572: $here/modern
stdout:
Modern C++: 201103
./c++.at:572: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:572: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:572: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |         _M_realloc_insert(end(), __x);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at list.cc:2097:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |         _M_realloc_insert(end(), __x);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:574: $here/modern
stdout:
Legac++
./c++.at:574: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:574: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:574: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:573: $here/modern
stdout:
Modern C++: 201103
./c++.at:573: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:573: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:573: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
stdout:
./c++.at:572: $here/modern
stdout:
Modern C++: 201402
./c++.at:572: $PREPARSER ./list
stderr:
Destroy: "0"
Destroy: "0"
Destroy: 1
Destroy: "1"
Destroy: (0)
Destroy: "2"
Destroy: "2"
Destroy: (0, 1)
Destroy: ""
Destroy: 3
Destroy: (0, 1, 2)
Destroy: "4"
Destroy: "4"
Destroy: (0, 1, 2)
Destroy: (0, 1, 2, 4)
Destroy: 5
Destroy: (0, 1, 2, 4)
Destroy: "6"
Destroy: "6"
Destroy: (0, 1, 2, 4)
Destroy: (0, 1, 2, 4, 6)
./c++.at:572: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:572: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y
./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from list.y:17:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |         _M_realloc_insert(end(), __x);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at list.cc:2097:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |         _M_realloc_insert(end(), __x);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
stderr:
stdout:
./c++.at:574: $here/modern stdout: Legac++ ./c++.at:574: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:574: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:574: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1201:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:573: $here/modern stdout: Modern C++: 201402 ./c++.at:573: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:573: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:573: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:572: $here/modern stdout: Modern C++: 201703 ./c++.at:572: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:572: sed >&2 -e 
'/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:572: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:574: $here/modern stdout: Modern C++: 201103 ./c++.at:574: $PREPARSER ./list stderr: Destroy: "" Destroy: "" Destroy: 1 Destroy: "" Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: "" Destroy: 3 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: () Destroy: 5 Destroy: () Destroy: "" Destroy: "" 
Destroy: () Destroy: (0, 1, 2, 4, 6) ./c++.at:574: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:574: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:573: $here/modern stdout: Modern C++: 201703 ./c++.at:573: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 
Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:573: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:573: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:572: $here/modern stdout: Modern C++: 202002 ./c++.at:572: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:572: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:572: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:574: $here/modern stdout: Modern C++: 201402 ./c++.at:574: $PREPARSER ./list stderr: Destroy: "" Destroy: "" Destroy: 1 Destroy: "" Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: "" Destroy: 3 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: () Destroy: 5 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: (0, 1, 2, 4, 6) ./c++.at:574: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 
======== Testing with C++ standard flags: '' ./c++.at:574: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:573: $here/modern stdout: Modern C++: 202002 ./c++.at:573: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:573: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:573: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y ./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: stdout: ./c++.at:572: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from list.y:17: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at list.cc:1462:24: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:574: $here/modern stdout: Modern C++: 201703 ./c++.at:574: $PREPARSER ./list stderr: Destroy: "" Destroy: "" Destroy: 1 Destroy: "" Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: "" Destroy: 3 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: () Destroy: 5 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: (0, 1, 2, 4, 6) ./c++.at:574: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:574: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y stderr: stdout: ./c++.at:572: $here/modern ./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stdout: Modern C++: 202100 ./c++.at:572: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:572: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 672. 
c++.at:572: ok 675. c++.at:584: testing Variants and Typed Midrule Actions ... ======== Testing with C++ standard flags: '' ./c++.at:659: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y ./c++.at:659: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from input.hh:53, from input.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.hh:1048:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:335:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.hh:1048:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:335:19, inlined from 'virtual int yy::parser::parse()' at input.cc:607:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:659: $PREPARSER ./input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token NUMBER (1) Shifting token NUMBER (1) Entering state 1 Stack now 0 1 Reducing stack by rule 1 (line 34): $1 = token NUMBER (1) -> $$ = nterm expr (10) destroy: 1 Entering state 2 Stack now 0 2 Reading a token Next token is token NUMBER (30) Reducing stack by rule 2 (line 35): -> $$ = nterm @1 (20) Entering state 4 Stack now 0 2 4 Next token is token NUMBER (30) Shifting token NUMBER (30) Entering state 5 Stack now 0 2 4 5 Reducing stack by rule 3 (line 35): $1 = nterm expr (10) $2 = nterm @1 (20) $3 = token NUMBER (30) expr: 10 20 30 -> $$ = nterm expr (40) destroy: 30 destroy: 20 destroy: 10 Entering state 2 Stack now 0 2 Reading a token Next token is token EOI () Shifting token EOI () Entering state 3 Stack now 0 2 3 Stack now 0 2 3 Cleanup: popping token EOI () Cleanup: popping nterm expr (40) destroy: 40 ./c++.at:659: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:659: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; 
export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y ./c++.at:659: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./c++.at:573: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS stderr: stdout: ./c++.at:573: $here/modern stdout: Modern C++: 202100 ./c++.at:573: $PREPARSER ./list stderr: Destroy: "0" Destroy: "0" Destroy: 1 Destroy: "1" Destroy: (0) Destroy: "2" Destroy: "2" Destroy: (0, 1) Destroy: "" Destroy: 3 Destroy: (0, 1, 2) Destroy: "4" Destroy: "4" Destroy: (0, 1, 2) Destroy: (0, 1, 2, 4) Destroy: 5 Destroy: (0, 1, 2, 4) Destroy: "6" Destroy: "6" Destroy: (0, 1, 2, 4) Destroy: (0, 1, 2, 4, 6) ./c++.at:573: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 673. c++.at:573: ok 676. c++.at:794: testing Doxygen Public Documentation ... ./c++.at:794: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy ./c++.at:794: doxygen --version || exit 77 --- /dev/null 2023-05-24 22:25:30.000000000 -1200 +++ /build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/676/stderr 2023-05-27 04:34:19.561510776 -1200 @@ -0,0 +1 @@ +/build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/676/test-source: line 180: doxygen: command not found stdout: 676. c++.at:794: skipped (c++.at:794) 677. c++.at:795: testing Doxygen Private Documentation ... ./c++.at:795: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy ./c++.at:795: doxygen --version || exit 77 --- /dev/null 2023-05-24 22:25:30.000000000 -1200 +++ /build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/677/stderr 2023-05-27 04:34:21.165481461 -1200 @@ -0,0 +1 @@ +/build/bison-3.8.2+dfsg/tests/testsuite.dir/at-groups/677/test-source: line 180: doxygen: command not found stdout: 677. 
c++.at:795: skipped (c++.at:795) stderr: stdout: ./c++.at:574: $here/modern stdout: Modern C++: 202002 ./c++.at:574: $PREPARSER ./list stderr: Destroy: "" Destroy: "" Destroy: 1 Destroy: "" Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: "" Destroy: 3 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: () Destroy: 5 Destroy: () Destroy: "" Destroy: "" Destroy: () Destroy: (0, 1, 2, 4, 6) ./c++.at:574: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:574: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o list.cc list.y 678. c++.at:848: testing Relative namespace references ... ./c++.at:849: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy ======== Testing with C++ standard flags: '' ./c++.at:849: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS ./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o list list.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from input.hh:53, from input.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 444 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In file included from /usr/include/c++/12/vector:64: In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.hh:1048:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at input.cc:335:19: 
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.hh:1048:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at input.cc:335:19, inlined from 'virtual int yy::parser::parse()' at input.cc:607:15: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ stdout: ./c++.at:659: $PREPARSER ./input stderr: Starting parse Entering state 0 Stack now 0 Reading a token Next token is token NUMBER (1) Shifting token NUMBER (1) Entering state 1 Stack now 0 1 Reducing stack by rule 1 (line 34): $1 = token NUMBER (1) -> $$ = nterm expr (10) destroy: 1 Entering state 2 Stack now 0 2 Reading a token Next token is token NUMBER (30) Reducing stack by rule 2 (line 35): -> $$ = nterm @1 (20) Entering state 4 Stack now 0 2 4 Next token is token NUMBER (30) Shifting token NUMBER (30) Entering state 5 Stack now 0 2 4 5 Reducing stack by rule 3 (line 35): $1 = nterm expr (10) $2 = nterm @1 (20) $3 = token NUMBER (30) expr: 10 20 30 -> $$ = nterm expr (40) destroy: 30 destroy: 20 destroy: 10 Entering state 2 Stack now 0 2 Reading a token Next token is token EOI () Shifting token EOI () Entering state 3 Stack now 0 2 3 Stack now 0 2 3 Cleanup: popping token EOI () Cleanup: popping nterm expr (40) destroy: 40 ./c++.at:659: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:659: COLUMNS=1000; export COLUMNS; 
NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y ./c++.at:659: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./c++.at:849: $PREPARSER ./input stderr: ./c++.at:849: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:849: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from input.hh:53, from input.cc:51: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 444 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In file included from /usr/include/c++/12/vector:64: In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.hh:1048:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at input.cc:335:19: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.hh:1048:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at input.cc:335:19, inlined from 'virtual int yy::parser::parse()' at input.cc:607:15: 
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |         _M_realloc_insert(end(), __x);
      |         ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:659: $PREPARSER ./input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token NUMBER (1)
Shifting token NUMBER (1)
Entering state 1
Stack now 0 1
Reducing stack by rule 1 (line 34):
   $1 = token NUMBER (1)
-> $$ = nterm expr (10)
destroy: 1
Entering state 2
Stack now 0 2
Reading a token
Next token is token NUMBER (30)
Reducing stack by rule 2 (line 35):
-> $$ = nterm @1 (20)
Entering state 4
Stack now 0 2 4
Next token is token NUMBER (30)
Shifting token NUMBER (30)
Entering state 5
Stack now 0 2 4 5
Reducing stack by rule 3 (line 35):
   $1 = nterm expr (10)
   $2 = nterm @1 (20)
   $3 = token NUMBER (30)
expr: 10 20 30
-> $$ = nterm expr (40)
destroy: 30
destroy: 20
destroy: 10
Entering state 2
Stack now 0 2
Reading a token
Next token is token EOI ()
Shifting token EOI ()
Entering state 3
Stack now 0 2 3
Stack now 0 2 3
Cleanup: popping token EOI ()
Cleanup: popping nterm expr (40)
destroy: 40
./c++.at:659: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:659: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:659: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:849: $PREPARSER ./input
stderr:
./c++.at:849: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:849: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:849: $PREPARSER ./input
stderr:
stdout:
stderr:
./c++.at:574: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o modern modern.cc $LIBS
./c++.at:849: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:849: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from input.hh:53,
                 from input.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.hh:1048:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:335:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.hh:1048:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:335:19,
    inlined from 'virtual int yy::parser::parse()' at input.cc:607:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:659: $PREPARSER ./input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token NUMBER (1)
Shifting token NUMBER (1)
Entering state 1
Stack now 0 1
Reducing stack by rule 1 (line 34):
   $1 = token NUMBER (1)
-> $$ = nterm expr (10)
destroy: 1
Entering state 2
Stack now 0 2
Reading a token
Next token is token NUMBER (30)
Reducing stack by rule 2 (line 35):
-> $$ = nterm @1 (20)
Entering state 4
Stack now 0 2 4
Next token is token NUMBER (30)
Shifting token NUMBER (30)
Entering state 5
Stack now 0 2 4 5
Reducing stack by rule 3 (line 35):
   $1 = nterm expr (10)
   $2 = nterm @1 (20)
   $3 = token NUMBER (30)
expr: 10 20 30
-> $$ = nterm expr (40)
destroy: 30
destroy: 20
destroy: 10
Entering state 2
Stack now 0 2
Reading a token
Next token is token EOI ()
Shifting token EOI ()
Entering state 3
Stack now 0 2 3
Stack now 0 2 3
Cleanup: popping token EOI ()
Cleanup: popping nterm expr (40)
destroy: 40
./c++.at:659: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:659: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:659: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:574: $here/modern
stdout:
Modern C++: 202100
./c++.at:574: $PREPARSER ./list
stderr:
Destroy: ""
Destroy: ""
Destroy: 1
Destroy: ""
Destroy: ()
Destroy: ""
Destroy: ""
Destroy: ()
Destroy: ""
Destroy: 3
Destroy: ()
Destroy: ""
Destroy: ""
Destroy: ()
Destroy: ()
Destroy: 5
Destroy: ()
Destroy: ""
Destroy: ""
Destroy: ()
Destroy: (0, 1, 2, 4, 6)
./c++.at:574: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
674. c++.at:574: ok
679. c++.at:854: testing Absolute namespace references ...
./c++.at:855: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
======== Testing with C++ standard flags: ''
./c++.at:855: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:849: $PREPARSER ./input
stderr:
./c++.at:849: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:849: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from input.hh:53,
                 from input.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.hh:1048:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:335:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.hh:1048:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:335:19,
    inlined from 'virtual int yy::parser::parse()' at input.cc:607:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:659: $PREPARSER ./input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token NUMBER (1)
Shifting token NUMBER (1)
Entering state 1
Stack now 0 1
Reducing stack by rule 1 (line 34):
   $1 = token NUMBER (1)
-> $$ = nterm expr (10)
destroy: 1
Entering state 2
Stack now 0 2
Reading a token
Next token is token NUMBER (30)
Reducing stack by rule 2 (line 35):
-> $$ = nterm @1 (20)
Entering state 4
Stack now 0 2 4
Next token is token NUMBER (30)
Shifting token NUMBER (30)
Entering state 5
Stack now 0 2 4 5
Reducing stack by rule 3 (line 35):
   $1 = nterm expr (10)
   $2 = nterm @1 (20)
   $3 = token NUMBER (30)
expr: 10 20 30
-> $$ = nterm expr (40)
destroy: 30
destroy: 20
destroy: 10
Entering state 2
Stack now 0 2
Reading a token
Next token is token EOI ()
Shifting token EOI ()
Entering state 3
Stack now 0 2 3
Stack now 0 2 3
Cleanup: popping token EOI ()
Cleanup: popping nterm expr (40)
destroy: 40
./c++.at:659: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:659: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
stderr:
stdout:
./c++.at:855: $PREPARSER ./input
stderr:
./c++.at:855: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:855: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
./c++.at:659: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:849: $PREPARSER ./input
stderr:
./c++.at:849: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:849: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:855: $PREPARSER ./input
stderr:
./c++.at:855: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:855: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:849: $PREPARSER ./input
stderr:
./c++.at:849: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:849: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:855: $PREPARSER ./input
stderr:
./c++.at:855: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
stderr:
./c++.at:855: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
In file included from /usr/include/c++/12/vector:70,
                 from input.hh:53,
                 from input.cc:51:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |       vector<_Tp, _Alloc>::
      |       ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.hh:1048:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:335:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.hh:1048:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:335:19,
    inlined from 'virtual int yy::parser::parse()' at input.cc:607:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 |           _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:659: $PREPARSER ./input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token NUMBER (1)
Shifting token NUMBER (1)
Entering state 1
Stack now 0 1
Reducing stack by rule 1 (line 34):
   $1 = token NUMBER (1)
-> $$ = nterm expr (10)
destroy: 1
Entering state 2
Stack now 0 2
Reading a token
Next token is token NUMBER (30)
Reducing stack by rule 2 (line 35):
-> $$ = nterm @1 (20)
Entering state 4
Stack now 0 2 4
Next token is token NUMBER (30)
Shifting token NUMBER (30)
Entering state 5
Stack now 0 2 4 5
Reducing stack by rule 3 (line 35):
   $1 = nterm expr (10)
   $2 = nterm @1 (20)
   $3 = token NUMBER (30)
expr: 10 20 30
-> $$ = nterm expr (40)
destroy: 30
destroy: 20
destroy: 10
Entering state 2
Stack now 0 2
Reading a token
Next token is token EOI ()
Shifting token EOI ()
Entering state 3
Stack now 0 2 3
Stack now 0 2 3
Cleanup: popping token EOI ()
Cleanup: popping nterm expr (40)
destroy: 40
./c++.at:659: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:659: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:659: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:849: $PREPARSER ./input
stderr:
./c++.at:849: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:849: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:855: $PREPARSER ./input
stderr:
./c++.at:855: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:855: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:659: $PREPARSER ./input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token NUMBER (1)
Shifting token NUMBER (1)
Entering state 1
Stack now 0 1
Reducing stack by rule 1 (line 34):
   $1 = token NUMBER (1)
-> $$ = nterm expr (10)
destroy: 1
Entering state 2
Stack now 0 2
Reading a token
Next token is token NUMBER (30)
Reducing stack by rule 2 (line 35):
-> $$ = nterm @1 (20)
Entering state 4
Stack now 0 2 4
Next token is token NUMBER (30)
Shifting token NUMBER (30)
Entering state 5
Stack now 0 2 4 5
Reducing stack by rule 3 (line 35):
   $1 = nterm expr (10)
   $2 = nterm @1 (20)
   $3 = token NUMBER (30)
expr: 10 20 30
-> $$ = nterm expr (40)
destroy: 30
destroy: 20
destroy: 10
Entering state 2
Stack now 0 2
Reading a token
Next token is token EOI ()
Shifting token EOI ()
Entering state 3
Stack now 0 2 3
Stack now 0 2 3
Cleanup: popping token EOI ()
Cleanup: popping nterm expr (40)
destroy: 40
./c++.at:659: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:659: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:659: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:855: $PREPARSER ./input
stderr:
./c++.at:855: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:855: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:849: $PREPARSER ./input
stderr:
./c++.at:849: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:850: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
======== Testing with C++ standard flags: ''
./c++.at:850: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:855: $PREPARSER ./input
stderr:
./c++.at:855: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:855: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:850: $PREPARSER ./input
stderr:
./c++.at:850: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:850: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:659: $PREPARSER ./input
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
Next token is token NUMBER (1)
Shifting token NUMBER (1)
Entering state 1
Stack now 0 1
Reducing stack by rule 1 (line 34):
   $1 = token NUMBER (1)
-> $$ = nterm expr (10)
destroy: 1
Entering state 2
Stack now 0 2
Reading a token
Next token is token NUMBER (30)
Reducing stack by rule 2 (line 35):
-> $$ = nterm @1 (20)
Entering state 4
Stack now 0 2 4
Next token is token NUMBER (30)
Shifting token NUMBER (30)
Entering state 5
Stack now 0 2 4 5
Reducing stack by rule 3 (line 35):
   $1 = nterm expr (10)
   $2 = nterm @1 (20)
   $3 = token NUMBER (30)
expr: 10 20 30
-> $$ = nterm expr (40)
destroy: 30
destroy: 20
destroy: 10
Entering state 2
Stack now 0 2
Reading a token
Next token is token EOI ()
Shifting token EOI ()
Entering state 3
Stack now 0 2 3
Stack now 0 2 3
Cleanup: popping token EOI ()
Cleanup: popping nterm expr (40)
destroy: 40
./c++.at:659: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
675. c++.at:584: ok
680. c++.at:863: testing Syntactically invalid namespace references ...
./c++.at:864: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
./c++.at:865: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
./c++.at:868: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
./c++.at:869: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
./c++.at:870: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
680. c++.at:863: ok
681. c++.at:884: testing Syntax error discarding no lookahead ...
======== Testing with C++ standard flags: ''
./c++.at:941: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:941: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:850: $PREPARSER ./input
stderr:
./c++.at:850: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:850: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:855: $PREPARSER ./input
stderr:
./c++.at:855: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:855: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:850: $PREPARSER ./input
stderr:
./c++.at:850: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:850: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:941: $PREPARSER ./input
stderr:
syntax error
Discarding 'a'.
Reducing 'a'.
./c++.at:941: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:941: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:941: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:855: $PREPARSER ./input
stderr:
./c++.at:855: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:856: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
======== Testing with C++ standard flags: ''
./c++.at:856: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:850: $PREPARSER ./input
stderr:
./c++.at:850: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:850: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:941: $PREPARSER ./input
stderr:
syntax error
Discarding 'a'.
Reducing 'a'.
./c++.at:941: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:941: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:941: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:856: $PREPARSER ./input
stderr:
./c++.at:856: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:856: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:850: $PREPARSER ./input
stderr:
./c++.at:850: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:850: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:941: $PREPARSER ./input
stderr:
syntax error
Discarding 'a'.
Reducing 'a'.
./c++.at:941: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:941: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:941: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:856: $PREPARSER ./input
stderr:
./c++.at:856: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:856: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:856: $PREPARSER ./input
stderr:
./c++.at:856: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:856: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:941: $PREPARSER ./input
stderr:
syntax error
Discarding 'a'.
Reducing 'a'.
./c++.at:941: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:941: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
stderr:
stdout:
./c++.at:850: $PREPARSER ./input
stderr:
./c++.at:850: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:850: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
./c++.at:941: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:856: $PREPARSER ./input
stderr:
./c++.at:856: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:856: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:941: $PREPARSER ./input
stderr:
syntax error
Discarding 'a'.
Reducing 'a'.
./c++.at:941: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:941: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:941: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:850: $PREPARSER ./input
stderr:
./c++.at:850: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:850: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:856: $PREPARSER ./input
stderr:
./c++.at:856: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:856: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:941: $PREPARSER ./input
stderr:
syntax error
Discarding 'a'.
Reducing 'a'.
./c++.at:941: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:941: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:941: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:850: $PREPARSER ./input
stderr:
./c++.at:850: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:851: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
======== Testing with C++ standard flags: ''
./c++.at:851: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:856: $PREPARSER ./input
stderr:
./c++.at:856: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:856: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:851: $PREPARSER ./input
stderr:
./c++.at:851: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:851: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:941: $PREPARSER ./input
stderr:
syntax error
Discarding 'a'.
Reducing 'a'.
./c++.at:941: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:941: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./c++.at:941: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:856: $PREPARSER ./input
stderr:
./c++.at:856: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:856: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:851: $PREPARSER ./input
stderr:
./c++.at:851: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:851: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:851: $PREPARSER ./input
stderr:
./c++.at:851: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:851: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:941: $PREPARSER ./input
stderr:
syntax error
Discarding 'a'.
Reducing 'a'.
./c++.at:941: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
681. c++.at:884: ok
682. c++.at:1064: testing Syntax error as exception: lalr1.cc ...
./c++.at:1064: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
======== Testing with C++ standard flags: ''
./c++.at:1064: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:856: $PREPARSER ./input
stderr:
./c++.at:856: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:857: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
======== Testing with C++ standard flags: ''
./c++.at:857: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:851: $PREPARSER ./input
stderr:
./c++.at:851: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:851: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:857: $PREPARSER ./input
stderr:
./c++.at:857: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:857: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:851: $PREPARSER ./input
stderr:
./c++.at:851: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:851: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1064: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:857: $PREPARSER ./input
stderr:
./c++.at:857: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:857: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:851: $PREPARSER ./input
stderr:
./c++.at:851: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:851: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1064: $PREPARSER ./input < in
stderr:
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stdout:
./c++.at:857: $PREPARSER ./input
./c++.at:1064: $PREPARSER ./input < in
stderr:
./c++.at:857: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
error: invalid expression
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:857: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1064: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:851: $PREPARSER ./input
stderr:
./c++.at:851: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:851: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:857: $PREPARSER ./input
stderr:
./c++.at:857: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:857: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1064: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:851: $PREPARSER ./input
stderr:
./c++.at:851: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
678. c++.at:848: ok
stderr:
stdout:
./c++.at:857: $PREPARSER ./input
stderr:
./c++.at:857: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:857: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
683. c++.at:1065: testing Syntax error as exception: glr.cc ...
./c++.at:1065: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy ======== Testing with C++ standard flags: '' ./c++.at:1065: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS stderr: stdout: ./c++.at:1064: $PREPARSER ./input < in stderr: error: invalid expression caught error error: invalid character caught error ./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1064: $PREPARSER ./input < in stderr: error: invalid expression ./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1064: $PREPARSER ./input < in stderr: error: invalid character ./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1064: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS stderr: stdout: ./c++.at:857: $PREPARSER ./input stderr: ./c++.at:857: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:857: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./c++.at:1065: $PREPARSER ./input < in stderr: error: invalid expression caught error error: invalid character caught error ./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1065: $PREPARSER ./input < in stderr: error: invalid expression ./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1065: $PREPARSER ./input < in stderr: error: invalid character ./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1065: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS stderr: stdout: ./c++.at:857: $PREPARSER ./input stderr: ./c++.at:857: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ 
standard flags: ''
./c++.at:857: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1064: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1065: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:857: $PREPARSER ./input
stderr:
./c++.at:857: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:858: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
======== Testing with C++ standard flags: ''
./c++.at:858: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:858: $PREPARSER ./input
stderr:
./c++.at:858: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:858: $CXX
$CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1064: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1065: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:858: $PREPARSER ./input
stderr:
./c++.at:858: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:858: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:858: $PREPARSER ./input
stderr:
./c++.at:858: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:858: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1065: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1064: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:858: $PREPARSER ./input
stderr:
./c++.at:858: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:858: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:858: $PREPARSER ./input
stderr:
./c++.at:858: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:858: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1065:
sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1065: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:858: $PREPARSER ./input
stderr:
./c++.at:858: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:858: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1064: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1064: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
682. c++.at:1064: ok
684. c++.at:1066: testing Syntax error as exception: glr2.cc ...
./c++.at:1066: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
======== Testing with C++ standard flags: ''
./c++.at:1066: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./c++.at:1066: ./check
./c++.at:1066: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1065: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:858: $PREPARSER ./input
stderr:
./c++.at:858: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:858: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:858: $PREPARSER ./input
stderr:
./c++.at:858: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:859: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
======== Testing with C++ standard flags: ''
./c++.at:859: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid
expression
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1065: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stderr:
stdout:
In file included from /usr/include/c++/12/vector:70,
from input.hh:51,
from input.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
439 | vector<_Tp, _Alloc>::
| ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1741:24,
inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:1580:66,
inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2006:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
| ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1741:24,
inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1566:58,
inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at input.cc:2431:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
| ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1741:24,
inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1566:58,
inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2414:58,
inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:2402:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
| ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:859: $PREPARSER ./input
./c++.at:1066: $PREPARSER ./input < in
stderr:
stderr:
./c++.at:859: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1066: $PREPARSER ./input < in
./c++.at:859: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
error: invalid expression
./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1066: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1066: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./c++.at:1066: ./check
-std=c++98 not supported
======== Testing with C++ standard flags: ''
./c++.at:1066: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./c++.at:1066: ./check
-std=c++03 not supported
======== Testing with C++ standard flags: ''
./c++.at:1066: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./c++.at:1066: ./check
./c++.at:1066: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:859: $PREPARSER ./input
stderr:
./c++.at:859: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:859: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:859: $PREPARSER ./input
stderr:
./c++.at:859: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:859: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1065: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1065: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
683. c++.at:1065: ok
685. c++.at:1360: testing Exception safety with error recovery ...
./c++.at:1360: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o exceptions exceptions.cc $LIBS
stderr:
stdout:
./c++.at:859: $PREPARSER ./input
stderr:
./c++.at:859: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:859: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1360: ./exceptions || exit 77
stderr:
Inner caught
Outer caught
./c++.at:1360: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc --report=all input.yy
======== Testing with C++ standard flags: ''
./c++.at:1360: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:859: $PREPARSER ./input
stderr:
./c++.at:859: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:859: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
from input.hh:51,
from input.cc:76:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
439 | vector<_Tp, _Alloc>::
| ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1741:24,
inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:1580:66,
inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2006:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
| ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1741:24,
inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1566:58,
inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at input.cc:2431:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
| ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1741:24,
inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1566:58,
inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2414:58,
inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:2402:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
| ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:1066: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1066: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1066: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1066: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./c++.at:1066: ./check
./c++.at:1066: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS
-o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:1360: $PREPARSER ./input aaaas
stderr:
exception caught: reduction
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaal
stderr:
exception caught: yylex
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input i
stderr:
stderr:
exception caught: initial-action
stdout:
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:859: $PREPARSER ./input
stderr:
./c++.at:859: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaap
stderr:
======== Testing with C++ standard flags: ''
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:859: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
./c++.at:1360: $PREPARSER ./input --debug aaaap
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0x1dc86e0->Object::Object { }
Next token is token 'a' (0x1dc86e0 'a')
Shifting token 'a' (0x1dc86e0 'a')
Entering state 2
Stack now 0 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0x1dc86e0 'a')
-> $$ = nterm item (0x1dc86e0 'a')
Entering state 11
Stack now 0 11
Reading a token
0x1dc8708->Object::Object { 0x1dc86e0 }
Next token is token 'a' (0x1dc8708 'a')
Shifting token 'a' (0x1dc8708 'a')
Entering state 2
Stack now 0 11 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0x1dc8708 'a')
-> $$ = nterm item (0x1dc8708 'a')
Entering state 11
Stack now 0 11 11
Reading a token
0x1dc8730->Object::Object { 0x1dc86e0, 0x1dc8708 }
Next token is token 'a' (0x1dc8730 'a')
Shifting token 'a' (0x1dc8730 'a')
Entering state 2
Stack now 0 11 11 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0x1dc8730 'a')
-> $$ = nterm item (0x1dc8730 'a')
Entering state 11
Stack now 0 11 11 11
Reading a token
0x1dc8758->Object::Object { 0x1dc86e0, 0x1dc8708, 0x1dc8730 }
Next
token is token 'a' (0x1dc8758 'a')
Shifting token 'a' (0x1dc8758 'a')
Entering state 2
Stack now 0 11 11 11 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0x1dc8758 'a')
-> $$ = nterm item (0x1dc8758 'a')
Entering state 11
Stack now 0 11 11 11 11
Reading a token
0x1dc8780->Object::Object { 0x1dc86e0, 0x1dc8708, 0x1dc8730, 0x1dc8758 }
Next token is token 'p' (0x1dc8780 'p'Exception caught: cleaning lookahead and stack
0x1dc8780->Object::~Object { 0x1dc86e0, 0x1dc8708, 0x1dc8730, 0x1dc8758, 0x1dc8780 }
0x1dc8758->Object::~Object { 0x1dc86e0, 0x1dc8708, 0x1dc8730, 0x1dc8758 }
0x1dc8730->Object::~Object { 0x1dc86e0, 0x1dc8708, 0x1dc8730 }
0x1dc8708->Object::~Object { 0x1dc86e0, 0x1dc8708 }
0x1dc86e0->Object::~Object { 0x1dc86e0 }
exception caught: printer
end { }
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0x1dc86e0->Object::Object { }
Next token is token 'a' (0x1dc86e0 'a')
Shifting token 'a' (0x1dc86e0 'a')
Entering state 2
Stack now 0 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0x1dc86e0 'a')
-> $$ = nterm item (0x1dc86e0 'a')
Entering state 11
Stack now 0 11
Reading a token
0x1dc8708->Object::Object { 0x1dc86e0 }
Next token is token 'a' (0x1dc8708 'a')
Shifting token 'a' (0x1dc8708 'a')
Entering state 2
Stack now 0 11 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0x1dc8708 'a')
-> $$ = nterm item (0x1dc8708 'a')
Entering state 11
Stack now 0 11 11
Reading a token
0x1dc8730->Object::Object { 0x1dc86e0, 0x1dc8708 }
Next token is token 'a' (0x1dc8730 'a')
Shifting token 'a' (0x1dc8730 'a')
Entering state 2
Stack now 0 11 11 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0x1dc8730 'a')
-> $$ = nterm item (0x1dc8730 'a')
Entering state 11
Stack now 0 11 11 11
Reading a token
0x1dc8758->Object::Object { 0x1dc86e0, 0x1dc8708, 0x1dc8730 }
Next token is token 'a' (0x1dc8758 'a')
Shifting token 'a' (0x1dc8758 'a')
Entering state 2
Stack
now 0 11 11 11 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0x1dc8758 'a')
-> $$ = nterm item (0x1dc8758 'a')
Entering state 11
Stack now 0 11 11 11 11
Reading a token
0x1dc8780->Object::Object { 0x1dc86e0, 0x1dc8708, 0x1dc8730, 0x1dc8758 }
Next token is token 'p' (0x1dc8780 'p'Exception caught: cleaning lookahead and stack
0x1dc8780->Object::~Object { 0x1dc86e0, 0x1dc8708, 0x1dc8730, 0x1dc8758, 0x1dc8780 }
0x1dc8758->Object::~Object { 0x1dc86e0, 0x1dc8708, 0x1dc8730, 0x1dc8758 }
0x1dc8730->Object::~Object { 0x1dc86e0, 0x1dc8708, 0x1dc8730 }
0x1dc8708->Object::~Object { 0x1dc86e0, 0x1dc8708 }
0x1dc86e0->Object::~Object { 0x1dc86e0 }
exception caught: printer
end { }
./c++.at:1360: grep '^exception caught: printer$' stderr
stdout:
exception caught: printer
./c++.at:1360: $PREPARSER ./input aaaae
stderr:
exception caught: syntax error
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaaT
stderr:
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaaR
stderr:
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1360: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:859: $PREPARSER ./input
stderr:
./c++.at:859: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:859: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1360: $PREPARSER ./input aaaas
stderr:
exception caught: reduction
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaal
stderr:
exception
caught: yylex
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input i
stderr:
exception caught: initial-action
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaap
stderr:
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input --debug aaaap
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0xb9d6e0->Object::Object { }
Next token is token 'a' (0xb9d6e0 'a')
Shifting token 'a' (0xb9d6e0 'a')
Entering state 2
Stack now 0 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0xb9d6e0 'a')
-> $$ = nterm item (0xb9d6e0 'a')
Entering state 11
Stack now 0 11
Reading a token
0xb9d708->Object::Object { 0xb9d6e0 }
Next token is token 'a' (0xb9d708 'a')
Shifting token 'a' (0xb9d708 'a')
Entering state 2
Stack now 0 11 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0xb9d708 'a')
-> $$ = nterm item (0xb9d708 'a')
Entering state 11
Stack now 0 11 11
Reading a token
0xb9d730->Object::Object { 0xb9d6e0, 0xb9d708 }
Next token is token 'a' (0xb9d730 'a')
Shifting token 'a' (0xb9d730 'a')
Entering state 2
Stack now 0 11 11 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0xb9d730 'a')
-> $$ = nterm item (0xb9d730 'a')
Entering state 11
Stack now 0 11 11 11
Reading a token
0xb9d758->Object::Object { 0xb9d6e0, 0xb9d708, 0xb9d730 }
Next token is token 'a' (0xb9d758 'a')
Shifting token 'a' (0xb9d758 'a')
Entering state 2
Stack now 0 11 11 11 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0xb9d758 'a')
-> $$ = nterm item (0xb9d758 'a')
Entering state 11
Stack now 0 11 11 11 11
Reading a token
0xb9d780->Object::Object { 0xb9d6e0, 0xb9d708, 0xb9d730, 0xb9d758 }
Next token is token 'p' (0xb9d780 'p'Exception caught: cleaning lookahead and stack
0xb9d780->Object::~Object { 0xb9d6e0, 0xb9d708, 0xb9d730, 0xb9d758, 0xb9d780 }
0xb9d758->Object::~Object { 0xb9d6e0,
0xb9d708, 0xb9d730, 0xb9d758 }
0xb9d730->Object::~Object { 0xb9d6e0, 0xb9d708, 0xb9d730 }
0xb9d708->Object::~Object { 0xb9d6e0, 0xb9d708 }
0xb9d6e0->Object::~Object { 0xb9d6e0 }
exception caught: printer
end { }
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0xb9d6e0->Object::Object { }
Next token is token 'a' (0xb9d6e0 'a')
Shifting token 'a' (0xb9d6e0 'a')
Entering state 2
Stack now 0 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0xb9d6e0 'a')
-> $$ = nterm item (0xb9d6e0 'a')
Entering state 11
Stack now 0 11
Reading a token
0xb9d708->Object::Object { 0xb9d6e0 }
Next token is token 'a' (0xb9d708 'a')
Shifting token 'a' (0xb9d708 'a')
Entering state 2
Stack now 0 11 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0xb9d708 'a')
-> $$ = nterm item (0xb9d708 'a')
Entering state 11
Stack now 0 11 11
Reading a token
0xb9d730->Object::Object { 0xb9d6e0, 0xb9d708 }
Next token is token 'a' (0xb9d730 'a')
Shifting token 'a' (0xb9d730 'a')
Entering state 2
Stack now 0 11 11 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0xb9d730 'a')
-> $$ = nterm item (0xb9d730 'a')
Entering state 11
Stack now 0 11 11 11
Reading a token
0xb9d758->Object::Object { 0xb9d6e0, 0xb9d708, 0xb9d730 }
Next token is token 'a' (0xb9d758 'a')
Shifting token 'a' (0xb9d758 'a')
Entering state 2
Stack now 0 11 11 11 2
Reducing stack by rule 4 (line 147):
$1 = token 'a' (0xb9d758 'a')
-> $$ = nterm item (0xb9d758 'a')
Entering state 11
Stack now 0 11 11 11 11
Reading a token
0xb9d780->Object::Object { 0xb9d6e0, 0xb9d708, 0xb9d730, 0xb9d758 }
Next token is token 'p' (0xb9d780 'p'Exception caught: cleaning lookahead and stack
0xb9d780->Object::~Object { 0xb9d6e0, 0xb9d708, 0xb9d730, 0xb9d758, 0xb9d780 }
0xb9d758->Object::~Object { 0xb9d6e0, 0xb9d708, 0xb9d730, 0xb9d758 }
0xb9d730->Object::~Object { 0xb9d6e0, 0xb9d708, 0xb9d730 }
0xb9d708->Object::~Object { 0xb9d6e0, 0xb9d708 }
0xb9d6e0->Object::~Object { 0xb9d6e0 } exception caught: printer end { } ./c++.at:1360: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1360: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaaT stderr: ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaaR stderr: ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1360: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./c++.at:859: $PREPARSER ./input stderr: ./c++.at:859: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:860: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy stderr: In file included from /usr/include/c++/12/vector:70, from input.hh:51, from input.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1741:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:1580:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2006:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1741:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1566:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at input.cc:2431:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1741:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1566:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2414:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:2402:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1066: $PREPARSER ./input < in ======== Testing with C++ standard flags: '' ./c++.at:860: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: error: invalid expression caught error error: invalid character caught error ./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1066: $PREPARSER ./input < in stderr: error: invalid expression ./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1066: $PREPARSER ./input < in stderr: error: invalid character ./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1066: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check 
check.cc $LIBS stderr: stdout: ./c++.at:1066: ./check ./c++.at:1066: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS stderr: stdout: ./c++.at:1360: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaap stderr: ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0x189b6e0->Object::Object { } Next token is token 'a' (0x189b6e0 'a') Shifting token 'a' (0x189b6e0 'a') Entering state 2 Stack now 0 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x189b6e0 'a') -> $$ = nterm item (0x189b6e0 'a') Entering state 11 Stack now 0 11 Reading a token 0x189b708->Object::Object { 0x189b6e0 } Next token is token 'a' (0x189b708 'a') Shifting token 'a' (0x189b708 'a') Entering state 2 Stack now 0 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x189b708 'a') -> $$ = nterm item (0x189b708 'a') Entering state 11 Stack now 0 11 11 Reading a token 0x189b730->Object::Object { 0x189b6e0, 0x189b708 } Next token is token 'a' (0x189b730 'a') Shifting token 'a' (0x189b730 'a') Entering state 2 Stack now 0 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x189b730 'a') -> $$ = nterm item (0x189b730 'a') Entering state 11 Stack now 0 11 11 11 Reading a token 0x189b758->Object::Object { 0x189b6e0, 0x189b708, 0x189b730 } Next token is token 'a' (0x189b758 'a') Shifting token 'a' (0x189b758 'a') Entering state 2 Stack now 0 11 11 11 2 Reducing stack by rule 4 
(line 147): $1 = token 'a' (0x189b758 'a') -> $$ = nterm item (0x189b758 'a') Entering state 11 Stack now 0 11 11 11 11 Reading a token 0x189b780->Object::Object { 0x189b6e0, 0x189b708, 0x189b730, 0x189b758 } Next token is token 'p' (0x189b780 'p'Exception caught: cleaning lookahead and stack 0x189b780->Object::~Object { 0x189b6e0, 0x189b708, 0x189b730, 0x189b758, 0x189b780 } 0x189b758->Object::~Object { 0x189b6e0, 0x189b708, 0x189b730, 0x189b758 } 0x189b730->Object::~Object { 0x189b6e0, 0x189b708, 0x189b730 } 0x189b708->Object::~Object { 0x189b6e0, 0x189b708 } 0x189b6e0->Object::~Object { 0x189b6e0 } exception caught: printer end { } ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0x189b6e0->Object::Object { } Next token is token 'a' (0x189b6e0 'a') Shifting token 'a' (0x189b6e0 'a') Entering state 2 Stack now 0 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x189b6e0 'a') -> $$ = nterm item (0x189b6e0 'a') Entering state 11 Stack now 0 11 Reading a token 0x189b708->Object::Object { 0x189b6e0 } Next token is token 'a' (0x189b708 'a') Shifting token 'a' (0x189b708 'a') Entering state 2 Stack now 0 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x189b708 'a') -> $$ = nterm item (0x189b708 'a') Entering state 11 Stack now 0 11 11 Reading a token 0x189b730->Object::Object { 0x189b6e0, 0x189b708 } Next token is token 'a' (0x189b730 'a') Shifting token 'a' (0x189b730 'a') Entering state 2 Stack now 0 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x189b730 'a') -> $$ = nterm item (0x189b730 'a') Entering state 11 Stack now 0 11 11 11 Reading a token 0x189b758->Object::Object { 0x189b6e0, 0x189b708, 0x189b730 } Next token is token 'a' (0x189b758 'a') Shifting token 'a' (0x189b758 'a') Entering state 2 Stack now 0 11 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x189b758 'a') -> $$ = nterm item (0x189b758 'a') Entering state 
11 Stack now 0 11 11 11 11 Reading a token 0x189b780->Object::Object { 0x189b6e0, 0x189b708, 0x189b730, 0x189b758 } Next token is token 'p' (0x189b780 'p'Exception caught: cleaning lookahead and stack 0x189b780->Object::~Object { 0x189b6e0, 0x189b708, 0x189b730, 0x189b758, 0x189b780 } 0x189b758->Object::~Object { 0x189b6e0, 0x189b708, 0x189b730, 0x189b758 } 0x189b730->Object::~Object { 0x189b6e0, 0x189b708, 0x189b730 } 0x189b708->Object::~Object { 0x189b6e0, 0x189b708 } 0x189b6e0->Object::~Object { 0x189b6e0 } exception caught: printer end { } ./c++.at:1360: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1360: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaaT stderr: ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaaR stderr: ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1360: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./c++.at:860: $PREPARSER ./input stderr: ./c++.at:860: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:860: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./c++.at:860: $PREPARSER ./input stderr: ./c++.at:860: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:860: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./c++.at:860: $PREPARSER ./input stderr: ./c++.at:860: sed >&2 -e '/^profiling:.*:Merge mismatch 
for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:860: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./c++.at:1360: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaap stderr: ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xd1a6e0->Object::Object { } Next token is token 'a' (0xd1a6e0 'a') Shifting token 'a' (0xd1a6e0 'a') Entering state 2 Stack now 0 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xd1a6e0 'a') -> $$ = nterm item (0xd1a6e0 'a') Entering state 11 Stack now 0 11 Reading a token 0xd1a708->Object::Object { 0xd1a6e0 } Next token is token 'a' (0xd1a708 'a') Shifting token 'a' (0xd1a708 'a') Entering state 2 Stack now 0 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xd1a708 'a') -> $$ = nterm item (0xd1a708 'a') Entering state 11 Stack now 0 11 11 Reading a token 0xd1a730->Object::Object { 0xd1a6e0, 0xd1a708 } Next token is token 'a' (0xd1a730 'a') Shifting token 'a' (0xd1a730 'a') Entering state 2 Stack now 0 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xd1a730 'a') -> $$ = nterm item (0xd1a730 'a') Entering state 11 Stack now 0 11 11 11 Reading a token 0xd1a758->Object::Object { 0xd1a6e0, 0xd1a708, 0xd1a730 } Next token is token 'a' (0xd1a758 'a') Shifting token 'a' (0xd1a758 'a') Entering state 2 Stack now 0 11 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xd1a758 
'a') -> $$ = nterm item (0xd1a758 'a') Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xd1a780->Object::Object { 0xd1a6e0, 0xd1a708, 0xd1a730, 0xd1a758 } Next token is token 'p' (0xd1a780 'p'Exception caught: cleaning lookahead and stack 0xd1a780->Object::~Object { 0xd1a6e0, 0xd1a708, 0xd1a730, 0xd1a758, 0xd1a780 } 0xd1a758->Object::~Object { 0xd1a6e0, 0xd1a708, 0xd1a730, 0xd1a758 } 0xd1a730->Object::~Object { 0xd1a6e0, 0xd1a708, 0xd1a730 } 0xd1a708->Object::~Object { 0xd1a6e0, 0xd1a708 } 0xd1a6e0->Object::~Object { 0xd1a6e0 } exception caught: printer end { } ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xd1a6e0->Object::Object { } Next token is token 'a' (0xd1a6e0 'a') Shifting token 'a' (0xd1a6e0 'a') Entering state 2 Stack now 0 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xd1a6e0 'a') -> $$ = nterm item (0xd1a6e0 'a') Entering state 11 Stack now 0 11 Reading a token 0xd1a708->Object::Object { 0xd1a6e0 } Next token is token 'a' (0xd1a708 'a') Shifting token 'a' (0xd1a708 'a') Entering state 2 Stack now 0 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xd1a708 'a') -> $$ = nterm item (0xd1a708 'a') Entering state 11 Stack now 0 11 11 Reading a token 0xd1a730->Object::Object { 0xd1a6e0, 0xd1a708 } Next token is token 'a' (0xd1a730 'a') Shifting token 'a' (0xd1a730 'a') Entering state 2 Stack now 0 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xd1a730 'a') -> $$ = nterm item (0xd1a730 'a') Entering state 11 Stack now 0 11 11 11 Reading a token 0xd1a758->Object::Object { 0xd1a6e0, 0xd1a708, 0xd1a730 } Next token is token 'a' (0xd1a758 'a') Shifting token 'a' (0xd1a758 'a') Entering state 2 Stack now 0 11 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xd1a758 'a') -> $$ = nterm item (0xd1a758 'a') Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xd1a780->Object::Object { 0xd1a6e0, 0xd1a708, 
0xd1a730, 0xd1a758 } Next token is token 'p' (0xd1a780 'p'Exception caught: cleaning lookahead and stack 0xd1a780->Object::~Object { 0xd1a6e0, 0xd1a708, 0xd1a730, 0xd1a758, 0xd1a780 } 0xd1a758->Object::~Object { 0xd1a6e0, 0xd1a708, 0xd1a730, 0xd1a758 } 0xd1a730->Object::~Object { 0xd1a6e0, 0xd1a708, 0xd1a730 } 0xd1a708->Object::~Object { 0xd1a6e0, 0xd1a708 } 0xd1a6e0->Object::~Object { 0xd1a6e0 } exception caught: printer end { } ./c++.at:1360: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1360: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaaT stderr: ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaaR stderr: ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1360: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./c++.at:860: $PREPARSER ./input stderr: ./c++.at:860: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:860: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from input.hh:51, from input.cc:76: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1741:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:1580:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2006:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1741:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1566:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at input.cc:2431:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:1741:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1566:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2414:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:2402:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1066: $PREPARSER ./input < in stderr: error: invalid expression caught error error: invalid character caught error ./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1066: $PREPARSER ./input < in stderr: error: invalid expression ./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1066: $PREPARSER ./input < in stderr: error: invalid character ./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1066: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./c++.at:1066: ./check ./c++.at:1066: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS 
-o input input.cc scan.cc $LIBS stderr: stdout: ./c++.at:860: $PREPARSER ./input stderr: ./c++.at:860: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:860: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./c++.at:1360: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaap stderr: ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0x14ce6e0->Object::Object { } Next token is token 'a' (0x14ce6e0 'a') Shifting token 'a' (0x14ce6e0 'a') Entering state 2 Stack now 0 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x14ce6e0 'a') -> $$ = nterm item (0x14ce6e0 'a') Entering state 11 Stack now 0 11 Reading a token 0x14ce708->Object::Object { 0x14ce6e0 } Next token is token 'a' (0x14ce708 'a') Shifting token 'a' (0x14ce708 'a') Entering state 2 Stack now 0 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x14ce708 'a') -> $$ = nterm item (0x14ce708 'a') Entering state 11 Stack now 0 11 11 Reading a token 0x14ce730->Object::Object { 0x14ce6e0, 0x14ce708 } Next token is token 'a' (0x14ce730 'a') Shifting token 'a' (0x14ce730 'a') Entering state 2 Stack now 0 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x14ce730 'a') -> $$ = nterm item (0x14ce730 'a') Entering state 11 Stack now 0 11 11 11 Reading a token 0x14ce758->Object::Object { 0x14ce6e0, 0x14ce708, 0x14ce730 } Next 
token is token 'a' (0x14ce758 'a') Shifting token 'a' (0x14ce758 'a') Entering state 2 Stack now 0 11 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x14ce758 'a') -> $$ = nterm item (0x14ce758 'a') Entering state 11 Stack now 0 11 11 11 11 Reading a token 0x14ce780->Object::Object { 0x14ce6e0, 0x14ce708, 0x14ce730, 0x14ce758 } Next token is token 'p' (0x14ce780 'p'Exception caught: cleaning lookahead and stack 0x14ce780->Object::~Object { 0x14ce6e0, 0x14ce708, 0x14ce730, 0x14ce758, 0x14ce780 } 0x14ce758->Object::~Object { 0x14ce6e0, 0x14ce708, 0x14ce730, 0x14ce758 } 0x14ce730->Object::~Object { 0x14ce6e0, 0x14ce708, 0x14ce730 } 0x14ce708->Object::~Object { 0x14ce6e0, 0x14ce708 } 0x14ce6e0->Object::~Object { 0x14ce6e0 } exception caught: printer end { } ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0x14ce6e0->Object::Object { } Next token is token 'a' (0x14ce6e0 'a') Shifting token 'a' (0x14ce6e0 'a') Entering state 2 Stack now 0 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x14ce6e0 'a') -> $$ = nterm item (0x14ce6e0 'a') Entering state 11 Stack now 0 11 Reading a token 0x14ce708->Object::Object { 0x14ce6e0 } Next token is token 'a' (0x14ce708 'a') Shifting token 'a' (0x14ce708 'a') Entering state 2 Stack now 0 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x14ce708 'a') -> $$ = nterm item (0x14ce708 'a') Entering state 11 Stack now 0 11 11 Reading a token 0x14ce730->Object::Object { 0x14ce6e0, 0x14ce708 } Next token is token 'a' (0x14ce730 'a') Shifting token 'a' (0x14ce730 'a') Entering state 2 Stack now 0 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x14ce730 'a') -> $$ = nterm item (0x14ce730 'a') Entering state 11 Stack now 0 11 11 11 Reading a token 0x14ce758->Object::Object { 0x14ce6e0, 0x14ce708, 0x14ce730 } Next token is token 'a' (0x14ce758 'a') Shifting token 'a' (0x14ce758 'a') Entering state 2 Stack 
now 0 11 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x14ce758 'a') -> $$ = nterm item (0x14ce758 'a') Entering state 11 Stack now 0 11 11 11 11 Reading a token 0x14ce780->Object::Object { 0x14ce6e0, 0x14ce708, 0x14ce730, 0x14ce758 } Next token is token 'p' (0x14ce780 'p'Exception caught: cleaning lookahead and stack 0x14ce780->Object::~Object { 0x14ce6e0, 0x14ce708, 0x14ce730, 0x14ce758, 0x14ce780 } 0x14ce758->Object::~Object { 0x14ce6e0, 0x14ce708, 0x14ce730, 0x14ce758 } 0x14ce730->Object::~Object { 0x14ce6e0, 0x14ce708, 0x14ce730 } 0x14ce708->Object::~Object { 0x14ce6e0, 0x14ce708 } 0x14ce6e0->Object::~Object { 0x14ce6e0 } exception caught: printer end { } ./c++.at:1360: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1360: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaaT stderr: ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaaR stderr: ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1360: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./c++.at:860: $PREPARSER ./input stderr: ./c++.at:860: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:860: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./c++.at:1360: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaal stderr: exception 
caught: yylex ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input aaaap stderr: ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1360: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0x17156e0->Object::Object { } Next token is token 'a' (0x17156e0 'a') Shifting token 'a' (0x17156e0 'a') Entering state 2 Stack now 0 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x17156e0 'a') -> $$ = nterm item (0x17156e0 'a') Entering state 11 Stack now 0 11 Reading a token 0x1715708->Object::Object { 0x17156e0 } Next token is token 'a' (0x1715708 'a') Shifting token 'a' (0x1715708 'a') Entering state 2 Stack now 0 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1715708 'a') -> $$ = nterm item (0x1715708 'a') Entering state 11 Stack now 0 11 11 Reading a token 0x1715730->Object::Object { 0x17156e0, 0x1715708 } Next token is token 'a' (0x1715730 'a') Shifting token 'a' (0x1715730 'a') Entering state 2 Stack now 0 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1715730 'a') -> $$ = nterm item (0x1715730 'a') Entering state 11 Stack now 0 11 11 11 Reading a token 0x1715758->Object::Object { 0x17156e0, 0x1715708, 0x1715730 } Next token is token 'a' (0x1715758 'a') Shifting token 'a' (0x1715758 'a') Entering state 2 Stack now 0 11 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1715758 'a') -> $$ = nterm item (0x1715758 'a') Entering state 11 Stack now 0 11 11 11 11 Reading a token 0x1715780->Object::Object { 0x17156e0, 0x1715708, 0x1715730, 0x1715758 } Next token is token 'p' (0x1715780 'p'Exception caught: cleaning lookahead and stack 0x1715780->Object::~Object { 0x17156e0, 0x1715708, 0x1715730, 0x1715758, 0x1715780 } 
0x1715758->Object::~Object { 0x17156e0, 0x1715708, 0x1715730, 0x1715758 } 0x1715730->Object::~Object { 0x17156e0, 0x1715708, 0x1715730 } 0x1715708->Object::~Object { 0x17156e0, 0x1715708 } 0x17156e0->Object::~Object { 0x17156e0 } exception caught: printer end { } ./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0x17156e0->Object::Object { } Next token is token 'a' (0x17156e0 'a') Shifting token 'a' (0x17156e0 'a') Entering state 2 Stack now 0 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x17156e0 'a') -> $$ = nterm item (0x17156e0 'a') Entering state 11 Stack now 0 11 Reading a token 0x1715708->Object::Object { 0x17156e0 } Next token is token 'a' (0x1715708 'a') Shifting token 'a' (0x1715708 'a') Entering state 2 Stack now 0 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1715708 'a') -> $$ = nterm item (0x1715708 'a') Entering state 11 Stack now 0 11 11 Reading a token 0x1715730->Object::Object { 0x17156e0, 0x1715708 } Next token is token 'a' (0x1715730 'a') Shifting token 'a' (0x1715730 'a') Entering state 2 Stack now 0 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1715730 'a') -> $$ = nterm item (0x1715730 'a') Entering state 11 Stack now 0 11 11 11 Reading a token 0x1715758->Object::Object { 0x17156e0, 0x1715708, 0x1715730 } Next token is token 'a' (0x1715758 'a') Shifting token 'a' (0x1715758 'a') Entering state 2 Stack now 0 11 11 11 2 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1715758 'a') -> $$ = nterm item (0x1715758 'a') Entering state 11 Stack now 0 11 11 11 11 Reading a token 0x1715780->Object::Object { 0x17156e0, 0x1715708, 0x1715730, 0x1715758 } Next token is token 'p' (0x1715780 'p'Exception caught: cleaning lookahead and stack 0x1715780->Object::~Object { 0x17156e0, 0x1715708, 0x1715730, 0x1715758, 0x1715780 } 0x1715758->Object::~Object { 0x17156e0, 0x1715708, 0x1715730, 0x1715758 } 
0x1715730->Object::~Object { 0x17156e0, 0x1715708, 0x1715730 }
0x1715708->Object::~Object { 0x17156e0, 0x1715708 }
0x17156e0->Object::~Object { 0x17156e0 }
exception caught: printer
end { }
./c++.at:1360: grep '^exception caught: printer$' stderr
stdout:
exception caught: printer
./c++.at:1360: $PREPARSER ./input aaaae
stderr:
exception caught: syntax error
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaaT
stderr:
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaaR
stderr:
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1360: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:860: $PREPARSER ./input
stderr:
./c++.at:860: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:860: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1066: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1066: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1066: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1066: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./c++.at:1066: ./check
./c++.at:1066: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc scan.cc $LIBS
stderr:
stdout:
./c++.at:860: $PREPARSER ./input
stderr:
./c++.at:860: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
679. c++.at:854: ok
686. c++.at:1361: testing Exception safety without error recovery ...
./c++.at:1361: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o exceptions exceptions.cc $LIBS
stderr:
stdout:
./c++.at:1361: ./exceptions || exit 77
stderr:
Inner caught
Outer caught
./c++.at:1361: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc --report=all input.yy
======== Testing with C++ standard flags: ''
./c++.at:1361: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1360: $PREPARSER ./input aaaas
stderr:
exception caught: reduction
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaal
stderr:
exception caught: yylex
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input i
stderr:
exception caught: initial-action
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaap
stderr:
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input --debug aaaap
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0xe296e0->Object::Object { }
Next token is token 'a' (0xe296e0 'a')
Shifting token 'a' (0xe296e0 'a')
Entering state 2
Stack now 0 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0xe296e0 'a')
-> $$ = nterm item (0xe296e0 'a')
Entering state 11
Stack now 0 11
Reading a token
0xe29708->Object::Object { 0xe296e0 }
Next token is token 'a' (0xe29708 'a')
Shifting token 'a' (0xe29708 'a')
Entering state 2
Stack now 0 11 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0xe29708 'a')
-> $$ = nterm item (0xe29708 'a')
Entering state 11
Stack now 0 11 11
Reading a token
0xe29730->Object::Object { 0xe296e0, 0xe29708 }
Next token is token 'a' (0xe29730 'a')
Shifting token 'a' (0xe29730 'a')
Entering state 2
Stack now 0 11 11 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0xe29730 'a')
-> $$ = nterm item (0xe29730 'a')
Entering state 11
Stack now 0 11 11 11
Reading a token
0xe29758->Object::Object { 0xe296e0, 0xe29708, 0xe29730 }
Next token is token 'a' (0xe29758 'a')
Shifting token 'a' (0xe29758 'a')
Entering state 2
Stack now 0 11 11 11 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0xe29758 'a')
-> $$ = nterm item (0xe29758 'a')
Entering state 11
Stack now 0 11 11 11 11
Reading a token
0xe29780->Object::Object { 0xe296e0, 0xe29708, 0xe29730, 0xe29758 }
Next token is token 'p' (0xe29780 'p'Exception caught: cleaning lookahead and stack
0xe29780->Object::~Object { 0xe296e0, 0xe29708, 0xe29730, 0xe29758, 0xe29780 }
0xe29758->Object::~Object { 0xe296e0, 0xe29708, 0xe29730, 0xe29758 }
0xe29730->Object::~Object { 0xe296e0, 0xe29708, 0xe29730 }
0xe29708->Object::~Object { 0xe296e0, 0xe29708 }
0xe296e0->Object::~Object { 0xe296e0 }
exception caught: printer
end { }
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0xe296e0->Object::Object { }
Next token is token 'a' (0xe296e0 'a')
Shifting token 'a' (0xe296e0 'a')
Entering state 2
Stack now 0 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0xe296e0 'a')
-> $$ = nterm item (0xe296e0 'a')
Entering state 11
Stack now 0 11
Reading a token
0xe29708->Object::Object { 0xe296e0 }
Next token is token 'a' (0xe29708 'a')
Shifting token 'a' (0xe29708 'a')
Entering state 2
Stack now 0 11 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0xe29708 'a')
-> $$ = nterm item (0xe29708 'a')
Entering state 11
Stack now 0 11 11
Reading a token
0xe29730->Object::Object { 0xe296e0, 0xe29708 }
Next token is token 'a' (0xe29730 'a')
Shifting token 'a' (0xe29730 'a')
Entering state 2
Stack now 0 11 11 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0xe29730 'a')
-> $$ = nterm item (0xe29730 'a')
Entering state 11
Stack now 0 11 11 11
Reading a token
0xe29758->Object::Object { 0xe296e0, 0xe29708, 0xe29730 }
Next token is token 'a' (0xe29758 'a')
Shifting token 'a' (0xe29758 'a')
Entering state 2
Stack now 0 11 11 11 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0xe29758 'a')
-> $$ = nterm item (0xe29758 'a')
Entering state 11
Stack now 0 11 11 11 11
Reading a token
0xe29780->Object::Object { 0xe296e0, 0xe29708, 0xe29730, 0xe29758 }
Next token is token 'p' (0xe29780 'p'Exception caught: cleaning lookahead and stack
0xe29780->Object::~Object { 0xe296e0, 0xe29708, 0xe29730, 0xe29758, 0xe29780 }
0xe29758->Object::~Object { 0xe296e0, 0xe29708, 0xe29730, 0xe29758 }
0xe29730->Object::~Object { 0xe296e0, 0xe29708, 0xe29730 }
0xe29708->Object::~Object { 0xe296e0, 0xe29708 }
0xe296e0->Object::~Object { 0xe296e0 }
exception caught: printer
end { }
./c++.at:1360: grep '^exception caught: printer$' stderr
stdout:
exception caught: printer
./c++.at:1360: $PREPARSER ./input aaaae
stderr:
exception caught: syntax error
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaaT
stderr:
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaaR
stderr:
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1360: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1361: $PREPARSER ./input aaaas
stderr:
exception caught: reduction
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaal
stderr:
exception caught: yylex
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input i
stderr:
exception caught: initial-action
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaap
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input --debug aaaap
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0x16886e0->Object::Object { }
Next token is token 'a' (0x16886e0 'a')
Shifting token 'a' (0x16886e0 'a')
Entering state 1
Stack now 0 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x16886e0 'a')
-> $$ = nterm item (0x16886e0 'a')
Entering state 10
Stack now 0 10
Reading a token
0x1688708->Object::Object { 0x16886e0 }
Next token is token 'a' (0x1688708 'a')
Shifting token 'a' (0x1688708 'a')
Entering state 1
Stack now 0 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1688708 'a')
-> $$ = nterm item (0x1688708 'a')
Entering state 10
Stack now 0 10 10
Reading a token
0x1688730->Object::Object { 0x16886e0, 0x1688708 }
Next token is token 'a' (0x1688730 'a')
Shifting token 'a' (0x1688730 'a')
Entering state 1
Stack now 0 10 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1688730 'a')
-> $$ = nterm item (0x1688730 'a')
Entering state 10
Stack now 0 10 10 10
Reading a token
0x1688758->Object::Object { 0x16886e0, 0x1688708, 0x1688730 }
Next token is token 'a' (0x1688758 'a')
Shifting token 'a' (0x1688758 'a')
Entering state 1
Stack now 0 10 10 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1688758 'a')
-> $$ = nterm item (0x1688758 'a')
Entering state 10
Stack now 0 10 10 10 10
Reading a token
0x1688780->Object::Object { 0x16886e0, 0x1688708, 0x1688730, 0x1688758 }
Next token is token 'p' (0x1688780 'p'Exception caught: cleaning lookahead and stack
0x1688780->Object::~Object { 0x16886e0, 0x1688708, 0x1688730, 0x1688758, 0x1688780 }
0x1688758->Object::~Object { 0x16886e0, 0x1688708, 0x1688730, 0x1688758 }
0x1688730->Object::~Object { 0x16886e0, 0x1688708, 0x1688730 }
0x1688708->Object::~Object { 0x16886e0, 0x1688708 }
0x16886e0->Object::~Object { 0x16886e0 }
exception caught: printer
end { }
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0x16886e0->Object::Object { }
Next token is token 'a' (0x16886e0 'a')
Shifting token 'a' (0x16886e0 'a')
Entering state 1
Stack now 0 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x16886e0 'a')
-> $$ = nterm item (0x16886e0 'a')
Entering state 10
Stack now 0 10
Reading a token
0x1688708->Object::Object { 0x16886e0 }
Next token is token 'a' (0x1688708 'a')
Shifting token 'a' (0x1688708 'a')
Entering state 1
Stack now 0 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1688708 'a')
-> $$ = nterm item (0x1688708 'a')
Entering state 10
Stack now 0 10 10
Reading a token
0x1688730->Object::Object { 0x16886e0, 0x1688708 }
Next token is token 'a' (0x1688730 'a')
Shifting token 'a' (0x1688730 'a')
Entering state 1
Stack now 0 10 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1688730 'a')
-> $$ = nterm item (0x1688730 'a')
Entering state 10
Stack now 0 10 10 10
Reading a token
0x1688758->Object::Object { 0x16886e0, 0x1688708, 0x1688730 }
Next token is token 'a' (0x1688758 'a')
Shifting token 'a' (0x1688758 'a')
Entering state 1
Stack now 0 10 10 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1688758 'a')
-> $$ = nterm item (0x1688758 'a')
Entering state 10
Stack now 0 10 10 10 10
Reading a token
0x1688780->Object::Object { 0x16886e0, 0x1688708, 0x1688730, 0x1688758 }
Next token is token 'p' (0x1688780 'p'Exception caught: cleaning lookahead and stack
0x1688780->Object::~Object { 0x16886e0, 0x1688708, 0x1688730, 0x1688758, 0x1688780 }
0x1688758->Object::~Object { 0x16886e0, 0x1688708, 0x1688730, 0x1688758 }
0x1688730->Object::~Object { 0x16886e0, 0x1688708, 0x1688730 }
0x1688708->Object::~Object { 0x16886e0, 0x1688708 }
0x16886e0->Object::~Object { 0x16886e0 }
exception caught: printer
end { }
./c++.at:1361: grep '^exception caught: printer$' stderr
stdout:
exception caught: printer
./c++.at:1361: $PREPARSER ./input aaaae
stderr:
exception caught: syntax error
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaT
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaR
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1361: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1360: $PREPARSER ./input aaaas
stderr:
exception caught: reduction
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaal
stderr:
exception caught: yylex
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input i
stderr:
exception caught: initial-action
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaap
stderr:
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input --debug aaaap
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0x14646e0->Object::Object { }
Next token is token 'a' (0x14646e0 'a')
Shifting token 'a' (0x14646e0 'a')
Entering state 2
Stack now 0 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x14646e0 'a')
-> $$ = nterm item (0x14646e0 'a')
Entering state 11
Stack now 0 11
Reading a token
0x1464708->Object::Object { 0x14646e0 }
Next token is token 'a' (0x1464708 'a')
Shifting token 'a' (0x1464708 'a')
Entering state 2
Stack now 0 11 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1464708 'a')
-> $$ = nterm item (0x1464708 'a')
Entering state 11
Stack now 0 11 11
Reading a token
0x1464730->Object::Object { 0x14646e0, 0x1464708 }
Next token is token 'a' (0x1464730 'a')
Shifting token 'a' (0x1464730 'a')
Entering state 2
Stack now 0 11 11 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1464730 'a')
-> $$ = nterm item (0x1464730 'a')
Entering state 11
Stack now 0 11 11 11
Reading a token
0x1464758->Object::Object { 0x14646e0, 0x1464708, 0x1464730 }
Next token is token 'a' (0x1464758 'a')
Shifting token 'a' (0x1464758 'a')
Entering state 2
Stack now 0 11 11 11 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1464758 'a')
-> $$ = nterm item (0x1464758 'a')
Entering state 11
Stack now 0 11 11 11 11
Reading a token
0x1464780->Object::Object { 0x14646e0, 0x1464708, 0x1464730, 0x1464758 }
Next token is token 'p' (0x1464780 'p'Exception caught: cleaning lookahead and stack
0x1464780->Object::~Object { 0x14646e0, 0x1464708, 0x1464730, 0x1464758, 0x1464780 }
0x1464758->Object::~Object { 0x14646e0, 0x1464708, 0x1464730, 0x1464758 }
0x1464730->Object::~Object { 0x14646e0, 0x1464708, 0x1464730 }
0x1464708->Object::~Object { 0x14646e0, 0x1464708 }
0x14646e0->Object::~Object { 0x14646e0 }
exception caught: printer
end { }
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0x14646e0->Object::Object { }
Next token is token 'a' (0x14646e0 'a')
Shifting token 'a' (0x14646e0 'a')
Entering state 2
Stack now 0 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x14646e0 'a')
-> $$ = nterm item (0x14646e0 'a')
Entering state 11
Stack now 0 11
Reading a token
0x1464708->Object::Object { 0x14646e0 }
Next token is token 'a' (0x1464708 'a')
Shifting token 'a' (0x1464708 'a')
Entering state 2
Stack now 0 11 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1464708 'a')
-> $$ = nterm item (0x1464708 'a')
Entering state 11
Stack now 0 11 11
Reading a token
0x1464730->Object::Object { 0x14646e0, 0x1464708 }
Next token is token 'a' (0x1464730 'a')
Shifting token 'a' (0x1464730 'a')
Entering state 2
Stack now 0 11 11 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1464730 'a')
-> $$ = nterm item (0x1464730 'a')
Entering state 11
Stack now 0 11 11 11
Reading a token
0x1464758->Object::Object { 0x14646e0, 0x1464708, 0x1464730 }
Next token is token 'a' (0x1464758 'a')
Shifting token 'a' (0x1464758 'a')
Entering state 2
Stack now 0 11 11 11 2
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1464758 'a')
-> $$ = nterm item (0x1464758 'a')
Entering state 11
Stack now 0 11 11 11 11
Reading a token
0x1464780->Object::Object { 0x14646e0, 0x1464708, 0x1464730, 0x1464758 }
Next token is token 'p' (0x1464780 'p'Exception caught: cleaning lookahead and stack
0x1464780->Object::~Object { 0x14646e0, 0x1464708, 0x1464730, 0x1464758, 0x1464780 }
0x1464758->Object::~Object { 0x14646e0, 0x1464708, 0x1464730, 0x1464758 }
0x1464730->Object::~Object { 0x14646e0, 0x1464708, 0x1464730 }
0x1464708->Object::~Object { 0x14646e0, 0x1464708 }
0x14646e0->Object::~Object { 0x14646e0 }
exception caught: printer
end { }
./c++.at:1360: grep '^exception caught: printer$' stderr
stdout:
exception caught: printer
./c++.at:1360: $PREPARSER ./input aaaae
stderr:
exception caught: syntax error
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaaT
stderr:
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1360: $PREPARSER ./input aaaaR
stderr:
./c++.at:1360: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
685. c++.at:1360: ok
687. c++.at:1362: testing Exception safety with error recovery api.value.type=variant ...
./c++.at:1362: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o exceptions exceptions.cc $LIBS
stderr:
stdout:
./c++.at:1362: ./exceptions || exit 77
stderr:
Inner caught
Outer caught
./c++.at:1362: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc --report=all input.yy
======== Testing with C++ standard flags: ''
./c++.at:1362: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1361: $PREPARSER ./input aaaas
stderr:
exception caught: reduction
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaal
stderr:
exception caught: yylex
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input i
stderr:
exception caught: initial-action
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaap
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input --debug aaaap
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0x15f76e0->Object::Object { }
Next token is token 'a' (0x15f76e0 'a')
Shifting token 'a' (0x15f76e0 'a')
Entering state 1
Stack now 0 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x15f76e0 'a')
-> $$ = nterm item (0x15f76e0 'a')
Entering state 10
Stack now 0 10
Reading a token
0x15f7708->Object::Object { 0x15f76e0 }
Next token is token 'a' (0x15f7708 'a')
Shifting token 'a' (0x15f7708 'a')
Entering state 1
Stack now 0 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x15f7708 'a')
-> $$ = nterm item (0x15f7708 'a')
Entering state 10
Stack now 0 10 10
Reading a token
0x15f7730->Object::Object { 0x15f76e0, 0x15f7708 }
Next token is token 'a' (0x15f7730 'a')
Shifting token 'a' (0x15f7730 'a')
Entering state 1
Stack now 0 10 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x15f7730 'a')
-> $$ = nterm item (0x15f7730 'a')
Entering state 10
Stack now 0 10 10 10
Reading a token
0x15f7758->Object::Object { 0x15f76e0, 0x15f7708, 0x15f7730 }
Next token is token 'a' (0x15f7758 'a')
Shifting token 'a' (0x15f7758 'a')
Entering state 1
Stack now 0 10 10 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x15f7758 'a')
-> $$ = nterm item (0x15f7758 'a')
Entering state 10
Stack now 0 10 10 10 10
Reading a token
0x15f7780->Object::Object { 0x15f76e0, 0x15f7708, 0x15f7730, 0x15f7758 }
Next token is token 'p' (0x15f7780 'p'Exception caught: cleaning lookahead and stack
0x15f7780->Object::~Object { 0x15f76e0, 0x15f7708, 0x15f7730, 0x15f7758, 0x15f7780 }
0x15f7758->Object::~Object { 0x15f76e0, 0x15f7708, 0x15f7730, 0x15f7758 }
0x15f7730->Object::~Object { 0x15f76e0, 0x15f7708, 0x15f7730 }
0x15f7708->Object::~Object { 0x15f76e0, 0x15f7708 }
0x15f76e0->Object::~Object { 0x15f76e0 }
exception caught: printer
end { }
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0x15f76e0->Object::Object { }
Next token is token 'a' (0x15f76e0 'a')
Shifting token 'a' (0x15f76e0 'a')
Entering state 1
Stack now 0 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x15f76e0 'a')
-> $$ = nterm item (0x15f76e0 'a')
Entering state 10
Stack now 0 10
Reading a token
0x15f7708->Object::Object { 0x15f76e0 }
Next token is token 'a' (0x15f7708 'a')
Shifting token 'a' (0x15f7708 'a')
Entering state 1
Stack now 0 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x15f7708 'a')
-> $$ = nterm item (0x15f7708 'a')
Entering state 10
Stack now 0 10 10
Reading a token
0x15f7730->Object::Object { 0x15f76e0, 0x15f7708 }
Next token is token 'a' (0x15f7730 'a')
Shifting token 'a' (0x15f7730 'a')
Entering state 1
Stack now 0 10 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x15f7730 'a')
-> $$ = nterm item (0x15f7730 'a')
Entering state 10
Stack now 0 10 10 10
Reading a token
0x15f7758->Object::Object { 0x15f76e0, 0x15f7708, 0x15f7730 }
Next token is token 'a' (0x15f7758 'a')
Shifting token 'a' (0x15f7758 'a')
Entering state 1
Stack now 0 10 10 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x15f7758 'a')
-> $$ = nterm item (0x15f7758 'a')
Entering state 10
Stack now 0 10 10 10 10
Reading a token
0x15f7780->Object::Object { 0x15f76e0, 0x15f7708, 0x15f7730, 0x15f7758 }
Next token is token 'p' (0x15f7780 'p'Exception caught: cleaning lookahead and stack
0x15f7780->Object::~Object { 0x15f76e0, 0x15f7708, 0x15f7730, 0x15f7758, 0x15f7780 }
0x15f7758->Object::~Object { 0x15f76e0, 0x15f7708, 0x15f7730, 0x15f7758 }
0x15f7730->Object::~Object { 0x15f76e0, 0x15f7708, 0x15f7730 }
0x15f7708->Object::~Object { 0x15f76e0, 0x15f7708 }
0x15f76e0->Object::~Object { 0x15f76e0 }
exception caught: printer
end { }
./c++.at:1361: grep '^exception caught: printer$' stderr
stdout:
exception caught: printer
./c++.at:1361: $PREPARSER ./input aaaae
stderr:
exception caught: syntax error
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaT
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaR
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1361: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1066: $PREPARSER ./input < in
stderr:
error: invalid expression
caught error
error: invalid character
caught error
./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1066: $PREPARSER ./input < in
stderr:
error: invalid expression
./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1066: $PREPARSER ./input < in
stderr:
error: invalid character
./c++.at:1066: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
684. c++.at:1066: ok
688. c++.at:1363: testing Exception safety without error recovery api.value.type=variant ...
./c++.at:1363: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o exceptions exceptions.cc $LIBS
stderr:
stdout:
./c++.at:1363: ./exceptions || exit 77
stderr:
Inner caught
Outer caught
./c++.at:1363: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc --report=all input.yy
======== Testing with C++ standard flags: ''
./c++.at:1363: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:63,
                 from input.cc:148:
/usr/include/c++/12/bits/stl_uninitialized.h: In function '_ForwardIterator std::__do_uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]':
/usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1
  113 | __do_uninit_copy(_InputIterator __first, _InputIterator __last,
      | ^~~~~~~~~~~~~~~~
/usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1
In file included from /usr/include/c++/12/vector:70:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]',
    inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15,
    inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37,
    inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2,
    inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:487:3:
/usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1
  137 | { return std::__do_uninit_copy(__first, __last, __result); }
      | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]',
    inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15,
    inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37,
    inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2,
    inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:494:3:
/usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1
  137 | { return std::__do_uninit_copy(__first, __last, __result); }
      | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19,
    inlined from 'virtual int yy::parser::parse()' at input.cc:2039:15:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./c++.at:1362: $PREPARSER ./input aaaas
stderr:
exception caught: reduction
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input aaaal
stderr:
exception caught: yylex
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input i
stderr:
exception caught: initial-action
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input aaaap
stderr:
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input --debug aaaap
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0xbeb51ccc->Object::Object { }
0xbeb51d58->Object::Object { 0xbeb51ccc }
0xbeb51ccc->Object::~Object { 0xbeb51ccc, 0xbeb51d58 }
Next token is token 'a' (0xbeb51d58 'a')
0xbeb51cc8->Object::Object { 0xbeb51d58 }
0xbeb51d58->Object::~Object { 0xbeb51cc8, 0xbeb51d58 }
Shifting token 'a' (0xbeb51cc8 'a')
0xa810b0->Object::Object { 0xbeb51cc8 }
0xbeb51cc8->Object::~Object { 0xa810b0, 0xbeb51cc8 }
Entering state 2
Stack now 0 2
0xbeb51d68->Object::Object { 0xa810b0 }
Reducing stack by rule 4 (line 142):
   $1 = token 'a' (0xa810b0 'a')
-> $$ = nterm item (0xbeb51d68 'a')
0xa810b0->Object::~Object { 0xa810b0, 0xbeb51d68 }
0xa810b0->Object::Object { 0xbeb51d68 }
0xbeb51d68->Object::~Object { 0xa810b0, 0xbeb51d68 }
Entering state 11
Stack now 0 11
Reading a token
0xbeb51ccc->Object::Object { 0xa810b0 }
0xbeb51d58->Object::Object { 0xa810b0, 0xbeb51ccc }
0xbeb51ccc->Object::~Object { 0xa810b0, 0xbeb51ccc, 0xbeb51d58 }
Next token is token 'a' (0xbeb51d58 'a')
0xbeb51cc8->Object::Object { 0xa810b0, 0xbeb51d58 }
0xbeb51d58->Object::~Object { 0xa810b0, 0xbeb51cc8, 0xbeb51d58 }
Shifting token 'a' (0xbeb51cc8 'a')
0xa810c0->Object::Object { 0xa810b0, 0xbeb51cc8 }
0xbeb51cc8->Object::~Object { 0xa810b0, 0xa810c0, 0xbeb51cc8 }
Entering state 2
Stack now 0 11 2
0xbeb51d68->Object::Object { 0xa810b0, 0xa810c0 }
Reducing stack by rule 4 (line 142):
   $1 = token 'a' (0xa810c0 'a')
-> $$ = nterm item (0xbeb51d68 'a')
0xa810c0->Object::~Object { 0xa810b0, 0xa810c0, 0xbeb51d68 }
0xa810c0->Object::Object { 0xa810b0, 0xbeb51d68 }
0xbeb51d68->Object::~Object { 0xa810b0, 0xa810c0, 0xbeb51d68 }
Entering state 11
Stack now 0 11 11
Reading a token
0xbeb51ccc->Object::Object { 0xa810b0, 0xa810c0 }
0xbeb51d58->Object::Object { 0xa810b0, 0xa810c0, 0xbeb51ccc }
0xbeb51ccc->Object::~Object { 0xa810b0, 0xa810c0, 0xbeb51ccc, 0xbeb51d58 }
Next token is token 'a' (0xbeb51d58 'a')
0xbeb51cc8->Object::Object { 0xa810b0, 0xa810c0, 0xbeb51d58 }
0xbeb51d58->Object::~Object { 0xa810b0, 0xa810c0, 0xbeb51cc8, 0xbeb51d58 }
Shifting token 'a' (0xbeb51cc8 'a')
0xa810d0->Object::Object { 0xa810b0, 0xa810c0, 0xbeb51cc8 }
0xbeb51cc8->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51cc8 }
Entering state 2
Stack now 0 11 11 2
0xbeb51d68->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0 }
Reducing stack by rule 4 (line 142):
   $1 = token 'a' (0xa810d0 'a')
-> $$ = nterm item (0xbeb51d68 'a')
0xa810d0->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51d68 }
0xa810d0->Object::Object { 0xa810b0, 0xa810c0, 0xbeb51d68 }
0xbeb51d68->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51d68 }
Entering state 11
Stack now 0 11 11 11
Reading a token
0xbeb51ccc->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0 }
0xbeb51d58->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51ccc }
0xbeb51ccc->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51ccc, 0xbeb51d58 }
Next token is token 'a' (0xbeb51d58 'a')
0xbeb51cc8->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51d58 }
0xbeb51d58->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51cc8, 0xbeb51d58 }
Shifting token 'a' (0xbeb51cc8 'a')
0xa810e0->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51cc8 }
0xbeb51cc8->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0, 0xbeb51cc8 }
Entering state 2
Stack now 0 11 11 11 2
0xbeb51d68->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0 }
Reducing stack by rule 4 (line 142):
   $1 = token 'a' (0xa810e0 'a')
-> $$ = nterm item (0xbeb51d68 'a')
0xa810e0->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0, 0xbeb51d68 }
0xa810e0->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51d68 }
0xbeb51d68->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0, 0xbeb51d68 }
Entering state 11
Stack now 0 11 11 11 11
Reading a token
0xbeb51ccc->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0 }
0xbeb51d58->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0, 0xbeb51ccc }
0xbeb51ccc->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0, 0xbeb51ccc, 0xbeb51d58 }
Next token is token 'p' (0xbeb51d58 'p'Exception caught: cleaning lookahead and stack
0xa810e0->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0, 0xbeb51d58 }
0xa810d0->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51d58 }
0xa810c0->Object::~Object { 0xa810b0, 0xa810c0, 0xbeb51d58 }
0xa810b0->Object::~Object { 0xa810b0, 0xbeb51d58 }
0xbeb51d58->Object::~Object { 0xbeb51d58 }
exception caught: printer
end
{ } ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbeb51ccc->Object::Object { } 0xbeb51d58->Object::Object { 0xbeb51ccc } 0xbeb51ccc->Object::~Object { 0xbeb51ccc, 0xbeb51d58 } Next token is token 'a' (0xbeb51d58 'a') 0xbeb51cc8->Object::Object { 0xbeb51d58 } 0xbeb51d58->Object::~Object { 0xbeb51cc8, 0xbeb51d58 } Shifting token 'a' (0xbeb51cc8 'a') 0xa810b0->Object::Object { 0xbeb51cc8 } 0xbeb51cc8->Object::~Object { 0xa810b0, 0xbeb51cc8 } Entering state 2 Stack now 0 2 0xbeb51d68->Object::Object { 0xa810b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0xa810b0 'a') -> $$ = nterm item (0xbeb51d68 'a') 0xa810b0->Object::~Object { 0xa810b0, 0xbeb51d68 } 0xa810b0->Object::Object { 0xbeb51d68 } 0xbeb51d68->Object::~Object { 0xa810b0, 0xbeb51d68 } Entering state 11 Stack now 0 11 Reading a token 0xbeb51ccc->Object::Object { 0xa810b0 } 0xbeb51d58->Object::Object { 0xa810b0, 0xbeb51ccc } 0xbeb51ccc->Object::~Object { 0xa810b0, 0xbeb51ccc, 0xbeb51d58 } Next token is token 'a' (0xbeb51d58 'a') 0xbeb51cc8->Object::Object { 0xa810b0, 0xbeb51d58 } 0xbeb51d58->Object::~Object { 0xa810b0, 0xbeb51cc8, 0xbeb51d58 } Shifting token 'a' (0xbeb51cc8 'a') 0xa810c0->Object::Object { 0xa810b0, 0xbeb51cc8 } 0xbeb51cc8->Object::~Object { 0xa810b0, 0xa810c0, 0xbeb51cc8 } Entering state 2 Stack now 0 11 2 0xbeb51d68->Object::Object { 0xa810b0, 0xa810c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0xa810c0 'a') -> $$ = nterm item (0xbeb51d68 'a') 0xa810c0->Object::~Object { 0xa810b0, 0xa810c0, 0xbeb51d68 } 0xa810c0->Object::Object { 0xa810b0, 0xbeb51d68 } 0xbeb51d68->Object::~Object { 0xa810b0, 0xa810c0, 0xbeb51d68 } Entering state 11 Stack now 0 11 11 Reading a token 0xbeb51ccc->Object::Object { 0xa810b0, 0xa810c0 } 0xbeb51d58->Object::Object { 0xa810b0, 0xa810c0, 0xbeb51ccc } 0xbeb51ccc->Object::~Object { 0xa810b0, 0xa810c0, 0xbeb51ccc, 0xbeb51d58 } Next token is token 
'a' (0xbeb51d58 'a') 0xbeb51cc8->Object::Object { 0xa810b0, 0xa810c0, 0xbeb51d58 } 0xbeb51d58->Object::~Object { 0xa810b0, 0xa810c0, 0xbeb51cc8, 0xbeb51d58 } Shifting token 'a' (0xbeb51cc8 'a') 0xa810d0->Object::Object { 0xa810b0, 0xa810c0, 0xbeb51cc8 } 0xbeb51cc8->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51cc8 } Entering state 2 Stack now 0 11 11 2 0xbeb51d68->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0xa810d0 'a') -> $$ = nterm item (0xbeb51d68 'a') 0xa810d0->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51d68 } 0xa810d0->Object::Object { 0xa810b0, 0xa810c0, 0xbeb51d68 } 0xbeb51d68->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51d68 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbeb51ccc->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0 } 0xbeb51d58->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51ccc } 0xbeb51ccc->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51ccc, 0xbeb51d58 } Next token is token 'a' (0xbeb51d58 'a') 0xbeb51cc8->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51d58 } 0xbeb51d58->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51cc8, 0xbeb51d58 } Shifting token 'a' (0xbeb51cc8 'a') 0xa810e0->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51cc8 } 0xbeb51cc8->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0, 0xbeb51cc8 } Entering state 2 Stack now 0 11 11 11 2 0xbeb51d68->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0xa810e0 'a') -> $$ = nterm item (0xbeb51d68 'a') 0xa810e0->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0, 0xbeb51d68 } 0xa810e0->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51d68 } 0xbeb51d68->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0, 0xbeb51d68 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbeb51ccc->Object::Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0 } 0xbeb51d58->Object::Object { 
0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0, 0xbeb51ccc } 0xbeb51ccc->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0, 0xbeb51ccc, 0xbeb51d58 } Next token is token 'p' (0xbeb51d58 'p'Exception caught: cleaning lookahead and stack 0xa810e0->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xa810e0, 0xbeb51d58 } 0xa810d0->Object::~Object { 0xa810b0, 0xa810c0, 0xa810d0, 0xbeb51d58 } 0xa810c0->Object::~Object { 0xa810b0, 0xa810c0, 0xbeb51d58 } 0xa810b0->Object::~Object { 0xa810b0, 0xbeb51d58 } 0xbeb51d58->Object::~Object { 0xbeb51d58 } exception caught: printer
end { }
./c++.at:1362: grep '^exception caught: printer$' stderr
stdout:
exception caught: printer
./c++.at:1362: $PREPARSER ./input aaaae
stderr:
exception caught: syntax error
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input aaaaT
stderr:
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input aaaaR
stderr:
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1362: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1361: $PREPARSER ./input aaaas
stderr:
exception caught: reduction
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaal
stderr:
exception caught: yylex
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input i
stderr:
exception caught: initial-action
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaap
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for
summaries/d' stderr ./c++.at:1361: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xa3c6e0->Object::Object { } Next token is token 'a' (0xa3c6e0 'a') Shifting token 'a' (0xa3c6e0 'a') Entering state 1 Stack now 0 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xa3c6e0 'a') -> $$ = nterm item (0xa3c6e0 'a') Entering state 10 Stack now 0 10 Reading a token 0xa3c708->Object::Object { 0xa3c6e0 } Next token is token 'a' (0xa3c708 'a') Shifting token 'a' (0xa3c708 'a') Entering state 1 Stack now 0 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xa3c708 'a') -> $$ = nterm item (0xa3c708 'a') Entering state 10 Stack now 0 10 10 Reading a token 0xa3c730->Object::Object { 0xa3c6e0, 0xa3c708 } Next token is token 'a' (0xa3c730 'a') Shifting token 'a' (0xa3c730 'a') Entering state 1 Stack now 0 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xa3c730 'a') -> $$ = nterm item (0xa3c730 'a') Entering state 10 Stack now 0 10 10 10 Reading a token 0xa3c758->Object::Object { 0xa3c6e0, 0xa3c708, 0xa3c730 } Next token is token 'a' (0xa3c758 'a') Shifting token 'a' (0xa3c758 'a') Entering state 1 Stack now 0 10 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xa3c758 'a') -> $$ = nterm item (0xa3c758 'a') Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xa3c780->Object::Object { 0xa3c6e0, 0xa3c708, 0xa3c730, 0xa3c758 } Next token is token 'p' (0xa3c780 'p'Exception caught: cleaning lookahead and stack 0xa3c780->Object::~Object { 0xa3c6e0, 0xa3c708, 0xa3c730, 0xa3c758, 0xa3c780 } 0xa3c758->Object::~Object { 0xa3c6e0, 0xa3c708, 0xa3c730, 0xa3c758 } 0xa3c730->Object::~Object { 0xa3c6e0, 0xa3c708, 0xa3c730 } 0xa3c708->Object::~Object { 0xa3c6e0, 0xa3c708 } 0xa3c6e0->Object::~Object { 0xa3c6e0 } exception caught: printer end { } ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 
0xa3c6e0->Object::Object { } Next token is token 'a' (0xa3c6e0 'a') Shifting token 'a' (0xa3c6e0 'a') Entering state 1 Stack now 0 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xa3c6e0 'a') -> $$ = nterm item (0xa3c6e0 'a') Entering state 10 Stack now 0 10 Reading a token 0xa3c708->Object::Object { 0xa3c6e0 } Next token is token 'a' (0xa3c708 'a') Shifting token 'a' (0xa3c708 'a') Entering state 1 Stack now 0 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xa3c708 'a') -> $$ = nterm item (0xa3c708 'a') Entering state 10 Stack now 0 10 10 Reading a token 0xa3c730->Object::Object { 0xa3c6e0, 0xa3c708 } Next token is token 'a' (0xa3c730 'a') Shifting token 'a' (0xa3c730 'a') Entering state 1 Stack now 0 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xa3c730 'a') -> $$ = nterm item (0xa3c730 'a') Entering state 10 Stack now 0 10 10 10 Reading a token 0xa3c758->Object::Object { 0xa3c6e0, 0xa3c708, 0xa3c730 } Next token is token 'a' (0xa3c758 'a') Shifting token 'a' (0xa3c758 'a') Entering state 1 Stack now 0 10 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0xa3c758 'a') -> $$ = nterm item (0xa3c758 'a') Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xa3c780->Object::Object { 0xa3c6e0, 0xa3c708, 0xa3c730, 0xa3c758 } Next token is token 'p' (0xa3c780 'p'Exception caught: cleaning lookahead and stack 0xa3c780->Object::~Object { 0xa3c6e0, 0xa3c708, 0xa3c730, 0xa3c758, 0xa3c780 } 0xa3c758->Object::~Object { 0xa3c6e0, 0xa3c708, 0xa3c730, 0xa3c758 } 0xa3c730->Object::~Object { 0xa3c6e0, 0xa3c708, 0xa3c730 } 0xa3c708->Object::~Object { 0xa3c6e0, 0xa3c708 } 0xa3c6e0->Object::~Object { 0xa3c6e0 } exception caught: printer end { } ./c++.at:1361: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1361: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1361: $PREPARSER ./input aaaaE 
stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1361: $PREPARSER ./input aaaaT stderr: ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1361: $PREPARSER ./input aaaaR stderr: ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1361: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:63, from input.cc:148: /usr/include/c++/12/bits/stl_uninitialized.h: In function '_ForwardIterator std::__do_uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]': /usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 113 | __do_uninit_copy(_InputIterator __first, _InputIterator __last, | ^~~~~~~~~~~~~~~~ /usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 In file included from /usr/include/c++/12/vector:70: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]', inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15, inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37, inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2, inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:487:3: /usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 137 | { return std::__do_uninit_copy(__first, __last, __result); } | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~ In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]', inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15, inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37, inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2, inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:494:3: /usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 137 | { return std::__do_uninit_copy(__first, __last, __result); } | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19, inlined from 'virtual int yy::parser::parse()' at input.cc:2033:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1363: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaap stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbeb30cd4->Object::Object { } 0xbeb30d58->Object::Object { 0xbeb30cd4 } 0xbeb30cd4->Object::~Object { 0xbeb30cd4, 0xbeb30d58 } Next token is token 'a' (0xbeb30d58 'a') 0xbeb30cd0->Object::Object { 0xbeb30d58 } 0xbeb30d58->Object::~Object { 0xbeb30cd0, 0xbeb30d58 } Shifting token 'a' (0xbeb30cd0 'a') 0x6790b0->Object::Object { 0xbeb30cd0 } 0xbeb30cd0->Object::~Object { 0x6790b0, 0xbeb30cd0 } Entering state 1 Stack 
now 0 1 0xbeb30d68->Object::Object { 0x6790b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6790b0 'a') -> $$ = nterm item (0xbeb30d68 'a') 0x6790b0->Object::~Object { 0x6790b0, 0xbeb30d68 } 0x6790b0->Object::Object { 0xbeb30d68 } 0xbeb30d68->Object::~Object { 0x6790b0, 0xbeb30d68 } Entering state 10 Stack now 0 10 Reading a token 0xbeb30cd4->Object::Object { 0x6790b0 } 0xbeb30d58->Object::Object { 0x6790b0, 0xbeb30cd4 } 0xbeb30cd4->Object::~Object { 0x6790b0, 0xbeb30cd4, 0xbeb30d58 } Next token is token 'a' (0xbeb30d58 'a') 0xbeb30cd0->Object::Object { 0x6790b0, 0xbeb30d58 } 0xbeb30d58->Object::~Object { 0x6790b0, 0xbeb30cd0, 0xbeb30d58 } Shifting token 'a' (0xbeb30cd0 'a') 0x6790c0->Object::Object { 0x6790b0, 0xbeb30cd0 } 0xbeb30cd0->Object::~Object { 0x6790b0, 0x6790c0, 0xbeb30cd0 } Entering state 1 Stack now 0 10 1 0xbeb30d68->Object::Object { 0x6790b0, 0x6790c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6790c0 'a') -> $$ = nterm item (0xbeb30d68 'a') 0x6790c0->Object::~Object { 0x6790b0, 0x6790c0, 0xbeb30d68 } 0x6790c0->Object::Object { 0x6790b0, 0xbeb30d68 } 0xbeb30d68->Object::~Object { 0x6790b0, 0x6790c0, 0xbeb30d68 } Entering state 10 Stack now 0 10 10 Reading a token 0xbeb30cd4->Object::Object { 0x6790b0, 0x6790c0 } 0xbeb30d58->Object::Object { 0x6790b0, 0x6790c0, 0xbeb30cd4 } 0xbeb30cd4->Object::~Object { 0x6790b0, 0x6790c0, 0xbeb30cd4, 0xbeb30d58 } Next token is token 'a' (0xbeb30d58 'a') 0xbeb30cd0->Object::Object { 0x6790b0, 0x6790c0, 0xbeb30d58 } 0xbeb30d58->Object::~Object { 0x6790b0, 0x6790c0, 0xbeb30cd0, 0xbeb30d58 } Shifting token 'a' (0xbeb30cd0 'a') 0x6790d0->Object::Object { 0x6790b0, 0x6790c0, 0xbeb30cd0 } 0xbeb30cd0->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30cd0 } Entering state 1 Stack now 0 10 10 1 0xbeb30d68->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6790d0 'a') -> $$ = nterm item (0xbeb30d68 'a') 0x6790d0->Object::~Object { 0x6790b0, 
0x6790c0, 0x6790d0, 0xbeb30d68 } 0x6790d0->Object::Object { 0x6790b0, 0x6790c0, 0xbeb30d68 } 0xbeb30d68->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30d68 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbeb30cd4->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0 } 0xbeb30d58->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30cd4 } 0xbeb30cd4->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30cd4, 0xbeb30d58 } Next token is token 'a' (0xbeb30d58 'a') 0xbeb30cd0->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30d58 } 0xbeb30d58->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30cd0, 0xbeb30d58 } Shifting token 'a' (0xbeb30cd0 'a') 0x6790e0->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30cd0 } 0xbeb30cd0->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0, 0xbeb30cd0 } Entering state 1 Stack now 0 10 10 10 1 0xbeb30d68->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6790e0 'a') -> $$ = nterm item (0xbeb30d68 'a') 0x6790e0->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0, 0xbeb30d68 } 0x6790e0->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30d68 } 0xbeb30d68->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0, 0xbeb30d68 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbeb30cd4->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0 } 0xbeb30d58->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0, 0xbeb30cd4 } 0xbeb30cd4->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0, 0xbeb30cd4, 0xbeb30d58 } Next token is token 'p' (0xbeb30d58 'p'Exception caught: cleaning lookahead and stack 0x6790e0->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0, 0xbeb30d58 } 0x6790d0->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30d58 } 0x6790c0->Object::~Object { 0x6790b0, 0x6790c0, 0xbeb30d58 } 0x6790b0->Object::~Object { 0x6790b0, 0xbeb30d58 } 0xbeb30d58->Object::~Object { 0xbeb30d58 } exception caught: printer end 
{ } ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbeb30cd4->Object::Object { } 0xbeb30d58->Object::Object { 0xbeb30cd4 } 0xbeb30cd4->Object::~Object { 0xbeb30cd4, 0xbeb30d58 } Next token is token 'a' (0xbeb30d58 'a') 0xbeb30cd0->Object::Object { 0xbeb30d58 } 0xbeb30d58->Object::~Object { 0xbeb30cd0, 0xbeb30d58 } Shifting token 'a' (0xbeb30cd0 'a') 0x6790b0->Object::Object { 0xbeb30cd0 } 0xbeb30cd0->Object::~Object { 0x6790b0, 0xbeb30cd0 } Entering state 1 Stack now 0 1 0xbeb30d68->Object::Object { 0x6790b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6790b0 'a') -> $$ = nterm item (0xbeb30d68 'a') 0x6790b0->Object::~Object { 0x6790b0, 0xbeb30d68 } 0x6790b0->Object::Object { 0xbeb30d68 } 0xbeb30d68->Object::~Object { 0x6790b0, 0xbeb30d68 } Entering state 10 Stack now 0 10 Reading a token 0xbeb30cd4->Object::Object { 0x6790b0 } 0xbeb30d58->Object::Object { 0x6790b0, 0xbeb30cd4 } 0xbeb30cd4->Object::~Object { 0x6790b0, 0xbeb30cd4, 0xbeb30d58 } Next token is token 'a' (0xbeb30d58 'a') 0xbeb30cd0->Object::Object { 0x6790b0, 0xbeb30d58 } 0xbeb30d58->Object::~Object { 0x6790b0, 0xbeb30cd0, 0xbeb30d58 } Shifting token 'a' (0xbeb30cd0 'a') 0x6790c0->Object::Object { 0x6790b0, 0xbeb30cd0 } 0xbeb30cd0->Object::~Object { 0x6790b0, 0x6790c0, 0xbeb30cd0 } Entering state 1 Stack now 0 10 1 0xbeb30d68->Object::Object { 0x6790b0, 0x6790c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6790c0 'a') -> $$ = nterm item (0xbeb30d68 'a') 0x6790c0->Object::~Object { 0x6790b0, 0x6790c0, 0xbeb30d68 } 0x6790c0->Object::Object { 0x6790b0, 0xbeb30d68 } 0xbeb30d68->Object::~Object { 0x6790b0, 0x6790c0, 0xbeb30d68 } Entering state 10 Stack now 0 10 10 Reading a token 0xbeb30cd4->Object::Object { 0x6790b0, 0x6790c0 } 0xbeb30d58->Object::Object { 0x6790b0, 0x6790c0, 0xbeb30cd4 } 0xbeb30cd4->Object::~Object { 0x6790b0, 0x6790c0, 0xbeb30cd4, 0xbeb30d58 } Next token is token 
'a' (0xbeb30d58 'a') 0xbeb30cd0->Object::Object { 0x6790b0, 0x6790c0, 0xbeb30d58 } 0xbeb30d58->Object::~Object { 0x6790b0, 0x6790c0, 0xbeb30cd0, 0xbeb30d58 } Shifting token 'a' (0xbeb30cd0 'a') 0x6790d0->Object::Object { 0x6790b0, 0x6790c0, 0xbeb30cd0 } 0xbeb30cd0->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30cd0 } Entering state 1 Stack now 0 10 10 1 0xbeb30d68->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6790d0 'a') -> $$ = nterm item (0xbeb30d68 'a') 0x6790d0->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30d68 } 0x6790d0->Object::Object { 0x6790b0, 0x6790c0, 0xbeb30d68 } 0xbeb30d68->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30d68 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbeb30cd4->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0 } 0xbeb30d58->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30cd4 } 0xbeb30cd4->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30cd4, 0xbeb30d58 } Next token is token 'a' (0xbeb30d58 'a') 0xbeb30cd0->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30d58 } 0xbeb30d58->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30cd0, 0xbeb30d58 } Shifting token 'a' (0xbeb30cd0 'a') 0x6790e0->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30cd0 } 0xbeb30cd0->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0, 0xbeb30cd0 } Entering state 1 Stack now 0 10 10 10 1 0xbeb30d68->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6790e0 'a') -> $$ = nterm item (0xbeb30d68 'a') 0x6790e0->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0, 0xbeb30d68 } 0x6790e0->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30d68 } 0xbeb30d68->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0, 0xbeb30d68 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbeb30cd4->Object::Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0 } 0xbeb30d58->Object::Object { 
0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0, 0xbeb30cd4 } 0xbeb30cd4->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0, 0xbeb30cd4, 0xbeb30d58 } Next token is token 'p' (0xbeb30d58 'p'Exception caught: cleaning lookahead and stack 0x6790e0->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0x6790e0, 0xbeb30d58 } 0x6790d0->Object::~Object { 0x6790b0, 0x6790c0, 0x6790d0, 0xbeb30d58 } 0x6790c0->Object::~Object { 0x6790b0, 0x6790c0, 0xbeb30d58 } 0x6790b0->Object::~Object { 0x6790b0, 0xbeb30d58 } 0xbeb30d58->Object::~Object { 0xbeb30d58 } exception caught: printer
end { }
./c++.at:1363: grep '^exception caught: printer$' stderr
stdout:
exception caught: printer
./c++.at:1363: $PREPARSER ./input aaaae
stderr:
exception caught: syntax error
./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1363: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1363: $PREPARSER ./input aaaaT
stderr:
./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1363: $PREPARSER ./input aaaaR
stderr:
./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1363: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1361: $PREPARSER ./input aaaas
stderr:
exception caught: reduction
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaal
stderr:
exception caught: yylex
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input i
stderr:
exception caught: initial-action
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaap
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for
summaries/d' stderr ./c++.at:1361: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0x18266e0->Object::Object { } Next token is token 'a' (0x18266e0 'a') Shifting token 'a' (0x18266e0 'a') Entering state 1 Stack now 0 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x18266e0 'a') -> $$ = nterm item (0x18266e0 'a') Entering state 10 Stack now 0 10 Reading a token 0x1826708->Object::Object { 0x18266e0 } Next token is token 'a' (0x1826708 'a') Shifting token 'a' (0x1826708 'a') Entering state 1 Stack now 0 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1826708 'a') -> $$ = nterm item (0x1826708 'a') Entering state 10 Stack now 0 10 10 Reading a token 0x1826730->Object::Object { 0x18266e0, 0x1826708 } Next token is token 'a' (0x1826730 'a') Shifting token 'a' (0x1826730 'a') Entering state 1 Stack now 0 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1826730 'a') -> $$ = nterm item (0x1826730 'a') Entering state 10 Stack now 0 10 10 10 Reading a token 0x1826758->Object::Object { 0x18266e0, 0x1826708, 0x1826730 } Next token is token 'a' (0x1826758 'a') Shifting token 'a' (0x1826758 'a') Entering state 1 Stack now 0 10 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1826758 'a') -> $$ = nterm item (0x1826758 'a') Entering state 10 Stack now 0 10 10 10 10 Reading a token 0x1826780->Object::Object { 0x18266e0, 0x1826708, 0x1826730, 0x1826758 } Next token is token 'p' (0x1826780 'p'Exception caught: cleaning lookahead and stack 0x1826780->Object::~Object { 0x18266e0, 0x1826708, 0x1826730, 0x1826758, 0x1826780 } 0x1826758->Object::~Object { 0x18266e0, 0x1826708, 0x1826730, 0x1826758 } 0x1826730->Object::~Object { 0x18266e0, 0x1826708, 0x1826730 } 0x1826708->Object::~Object { 0x18266e0, 0x1826708 } 0x18266e0->Object::~Object { 0x18266e0 } exception caught: printer end { } ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse 
Entering state 0 Stack now 0 Reading a token 0x18266e0->Object::Object { } Next token is token 'a' (0x18266e0 'a') Shifting token 'a' (0x18266e0 'a') Entering state 1 Stack now 0 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x18266e0 'a') -> $$ = nterm item (0x18266e0 'a') Entering state 10 Stack now 0 10 Reading a token 0x1826708->Object::Object { 0x18266e0 } Next token is token 'a' (0x1826708 'a') Shifting token 'a' (0x1826708 'a') Entering state 1 Stack now 0 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1826708 'a') -> $$ = nterm item (0x1826708 'a') Entering state 10 Stack now 0 10 10 Reading a token 0x1826730->Object::Object { 0x18266e0, 0x1826708 } Next token is token 'a' (0x1826730 'a') Shifting token 'a' (0x1826730 'a') Entering state 1 Stack now 0 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1826730 'a') -> $$ = nterm item (0x1826730 'a') Entering state 10 Stack now 0 10 10 10 Reading a token 0x1826758->Object::Object { 0x18266e0, 0x1826708, 0x1826730 } Next token is token 'a' (0x1826758 'a') Shifting token 'a' (0x1826758 'a') Entering state 1 Stack now 0 10 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1826758 'a') -> $$ = nterm item (0x1826758 'a') Entering state 10 Stack now 0 10 10 10 10 Reading a token 0x1826780->Object::Object { 0x18266e0, 0x1826708, 0x1826730, 0x1826758 } Next token is token 'p' (0x1826780 'p'Exception caught: cleaning lookahead and stack 0x1826780->Object::~Object { 0x18266e0, 0x1826708, 0x1826730, 0x1826758, 0x1826780 } 0x1826758->Object::~Object { 0x18266e0, 0x1826708, 0x1826730, 0x1826758 } 0x1826730->Object::~Object { 0x18266e0, 0x1826708, 0x1826730 } 0x1826708->Object::~Object { 0x18266e0, 0x1826708 } 0x18266e0->Object::~Object { 0x18266e0 } exception caught: printer end { } ./c++.at:1361: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1361: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1361: sed >&2 -e 
'/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaT
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaR
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1361: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from input.cc:148:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at input.cc:1707:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |           ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at
input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at input.cc:1707:19, inlined from 'virtual int yy::parser::parse()' at input.cc:2039:15: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ stdout: ./c++.at:1362: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaap stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbecfec94->Object::Object { } 0xbecfed20->Object::Object { 0xbecfec94 } 0xbecfec94->Object::~Object { 0xbecfec94, 0xbecfed20 } Next token is token 'a' (0xbecfed20 'a') 0xbecfecc0->Object::Object { 0xbecfed20 } 0xbecfec68->Object::Object { 0xbecfecc0, 0xbecfed20 } 0xbecfec68->Object::~Object { 0xbecfec68, 0xbecfecc0, 0xbecfed20 } 0xbecfed20->Object::~Object { 0xbecfecc0, 0xbecfed20 } Shifting token 'a' (0xbecfecc0 'a') 0x1cc60b0->Object::Object { 0xbecfecc0 } 0xbecfec68->Object::Object { 0x1cc60b0, 0xbecfecc0 } 0xbecfec68->Object::~Object { 0x1cc60b0, 0xbecfec68, 0xbecfecc0 } 0xbecfecc0->Object::~Object { 0x1cc60b0, 0xbecfecc0 } Entering state 2 Stack now 0 2 0xbecfed30->Object::Object { 0x1cc60b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1cc60b0 'a') -> $$ = nterm item (0xbecfed30 'a') 0x1cc60b0->Object::~Object { 0x1cc60b0, 0xbecfed30 } 
0x1cc60b0->Object::Object { 0xbecfed30 } 0xbecfecfc->Object::Object { 0x1cc60b0, 0xbecfed30 } 0xbecfecfc->Object::~Object { 0x1cc60b0, 0xbecfecfc, 0xbecfed30 } 0xbecfed30->Object::~Object { 0x1cc60b0, 0xbecfed30 } Entering state 11 Stack now 0 11 Reading a token 0xbecfec94->Object::Object { 0x1cc60b0 } 0xbecfed20->Object::Object { 0x1cc60b0, 0xbecfec94 } 0xbecfec94->Object::~Object { 0x1cc60b0, 0xbecfec94, 0xbecfed20 } Next token is token 'a' (0xbecfed20 'a') 0xbecfecc0->Object::Object { 0x1cc60b0, 0xbecfed20 } 0xbecfec68->Object::Object { 0x1cc60b0, 0xbecfecc0, 0xbecfed20 } 0xbecfec68->Object::~Object { 0x1cc60b0, 0xbecfec68, 0xbecfecc0, 0xbecfed20 } 0xbecfed20->Object::~Object { 0x1cc60b0, 0xbecfecc0, 0xbecfed20 } Shifting token 'a' (0xbecfecc0 'a') 0x1cc60c0->Object::Object { 0x1cc60b0, 0xbecfecc0 } 0xbecfec68->Object::Object { 0x1cc60b0, 0x1cc60c0, 0xbecfecc0 } 0xbecfec68->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfec68, 0xbecfecc0 } 0xbecfecc0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfecc0 } Entering state 2 Stack now 0 11 2 0xbecfed30->Object::Object { 0x1cc60b0, 0x1cc60c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1cc60c0 'a') -> $$ = nterm item (0xbecfed30 'a') 0x1cc60c0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfed30 } 0x1cc60c0->Object::Object { 0x1cc60b0, 0xbecfed30 } 0xbecfecfc->Object::Object { 0x1cc60b0, 0x1cc60c0, 0xbecfed30 } 0xbecfecfc->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfecfc, 0xbecfed30 } 0xbecfed30->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfed30 } Entering state 11 Stack now 0 11 11 Reading a token 0xbecfec94->Object::Object { 0x1cc60b0, 0x1cc60c0 } 0xbecfed20->Object::Object { 0x1cc60b0, 0x1cc60c0, 0xbecfec94 } 0xbecfec94->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfec94, 0xbecfed20 } Next token is token 'a' (0xbecfed20 'a') 0xbecfecc0->Object::Object { 0x1cc60b0, 0x1cc60c0, 0xbecfed20 } 0xbecfec68->Object::Object { 0x1cc60b0, 0x1cc60c0, 0xbecfecc0, 0xbecfed20 } 0xbecfec68->Object::~Object { 0x1cc60b0, 
0x1cc60c0, 0xbecfec68, 0xbecfecc0, 0xbecfed20 } 0xbecfed20->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfecc0, 0xbecfed20 } Shifting token 'a' (0xbecfecc0 'a') 0x1cc60d0->Object::Object { 0x1cc60b0, 0x1cc60c0, 0xbecfecc0 } 0xbecfec68->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfecc0 } 0xbecfec68->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfec68, 0xbecfecc0 } 0xbecfecc0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfecc0 } Entering state 2 Stack now 0 11 11 2 0xbecfed30->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1cc60d0 'a') -> $$ = nterm item (0xbecfed30 'a') 0x1cc60d0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfed30 } 0x1cc60d0->Object::Object { 0x1cc60b0, 0x1cc60c0, 0xbecfed30 } 0xbecfecfc->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfed30 } 0xbecfecfc->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfecfc, 0xbecfed30 } 0xbecfed30->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfed30 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbecfec94->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0 } 0xbecfed20->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfec94 } 0xbecfec94->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfec94, 0xbecfed20 } Next token is token 'a' (0xbecfed20 'a') 0xbecfecc0->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfed20 } 0xbecfec68->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfecc0, 0xbecfed20 } 0xbecfec68->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfec68, 0xbecfecc0, 0xbecfed20 } 0xbecfed20->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfecc0, 0xbecfed20 } Shifting token 'a' (0xbecfecc0 'a') 0x1cc60e0->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfecc0 } 0xbecfec68->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfecc0 } 0xbecfec68->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfec68, 
0xbecfecc0 } 0xbecfecc0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfecc0 } Entering state 2 Stack now 0 11 11 11 2 0xbecfed30->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1cc60e0 'a') -> $$ = nterm item (0xbecfed30 'a') 0x1cc60e0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfed30 } 0x1cc60e0->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfed30 } 0xbecfecfc->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfed30 } 0xbecfecfc->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfecfc, 0xbecfed30 } 0xbecfed30->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfed30 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbecfec94->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0 } 0xbecfed20->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfec94 } 0xbecfec94->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfec94, 0xbecfed20 } Next token is token 'p' (0xbecfed20 'p'Exception caught: cleaning lookahead and stack 0x1cc60e0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfed20 } 0x1cc60d0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfed20 } 0x1cc60c0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfed20 } 0x1cc60b0->Object::~Object { 0x1cc60b0, 0xbecfed20 } 0xbecfed20->Object::~Object { 0xbecfed20 } exception caught: printer end { } ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbecfec94->Object::Object { } 0xbecfed20->Object::Object { 0xbecfec94 } 0xbecfec94->Object::~Object { 0xbecfec94, 0xbecfed20 } Next token is token 'a' (0xbecfed20 'a') 0xbecfecc0->Object::Object { 0xbecfed20 } 0xbecfec68->Object::Object { 0xbecfecc0, 0xbecfed20 } 0xbecfec68->Object::~Object { 0xbecfec68, 0xbecfecc0, 0xbecfed20 } 
0xbecfed20->Object::~Object { 0xbecfecc0, 0xbecfed20 } Shifting token 'a' (0xbecfecc0 'a') 0x1cc60b0->Object::Object { 0xbecfecc0 } 0xbecfec68->Object::Object { 0x1cc60b0, 0xbecfecc0 } 0xbecfec68->Object::~Object { 0x1cc60b0, 0xbecfec68, 0xbecfecc0 } 0xbecfecc0->Object::~Object { 0x1cc60b0, 0xbecfecc0 } Entering state 2 Stack now 0 2 0xbecfed30->Object::Object { 0x1cc60b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1cc60b0 'a') -> $$ = nterm item (0xbecfed30 'a') 0x1cc60b0->Object::~Object { 0x1cc60b0, 0xbecfed30 } 0x1cc60b0->Object::Object { 0xbecfed30 } 0xbecfecfc->Object::Object { 0x1cc60b0, 0xbecfed30 } 0xbecfecfc->Object::~Object { 0x1cc60b0, 0xbecfecfc, 0xbecfed30 } 0xbecfed30->Object::~Object { 0x1cc60b0, 0xbecfed30 } Entering state 11 Stack now 0 11 Reading a token 0xbecfec94->Object::Object { 0x1cc60b0 } 0xbecfed20->Object::Object { 0x1cc60b0, 0xbecfec94 } 0xbecfec94->Object::~Object { 0x1cc60b0, 0xbecfec94, 0xbecfed20 } Next token is token 'a' (0xbecfed20 'a') 0xbecfecc0->Object::Object { 0x1cc60b0, 0xbecfed20 } 0xbecfec68->Object::Object { 0x1cc60b0, 0xbecfecc0, 0xbecfed20 } 0xbecfec68->Object::~Object { 0x1cc60b0, 0xbecfec68, 0xbecfecc0, 0xbecfed20 } 0xbecfed20->Object::~Object { 0x1cc60b0, 0xbecfecc0, 0xbecfed20 } Shifting token 'a' (0xbecfecc0 'a') 0x1cc60c0->Object::Object { 0x1cc60b0, 0xbecfecc0 } 0xbecfec68->Object::Object { 0x1cc60b0, 0x1cc60c0, 0xbecfecc0 } 0xbecfec68->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfec68, 0xbecfecc0 } 0xbecfecc0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfecc0 } Entering state 2 Stack now 0 11 2 0xbecfed30->Object::Object { 0x1cc60b0, 0x1cc60c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1cc60c0 'a') -> $$ = nterm item (0xbecfed30 'a') 0x1cc60c0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfed30 } 0x1cc60c0->Object::Object { 0x1cc60b0, 0xbecfed30 } 0xbecfecfc->Object::Object { 0x1cc60b0, 0x1cc60c0, 0xbecfed30 } 0xbecfecfc->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfecfc, 0xbecfed30 
} 0xbecfed30->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfed30 } Entering state 11 Stack now 0 11 11 Reading a token 0xbecfec94->Object::Object { 0x1cc60b0, 0x1cc60c0 } 0xbecfed20->Object::Object { 0x1cc60b0, 0x1cc60c0, 0xbecfec94 } 0xbecfec94->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfec94, 0xbecfed20 } Next token is token 'a' (0xbecfed20 'a') 0xbecfecc0->Object::Object { 0x1cc60b0, 0x1cc60c0, 0xbecfed20 } 0xbecfec68->Object::Object { 0x1cc60b0, 0x1cc60c0, 0xbecfecc0, 0xbecfed20 } 0xbecfec68->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfec68, 0xbecfecc0, 0xbecfed20 } 0xbecfed20->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfecc0, 0xbecfed20 } Shifting token 'a' (0xbecfecc0 'a') 0x1cc60d0->Object::Object { 0x1cc60b0, 0x1cc60c0, 0xbecfecc0 } 0xbecfec68->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfecc0 } 0xbecfec68->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfec68, 0xbecfecc0 } 0xbecfecc0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfecc0 } Entering state 2 Stack now 0 11 11 2 0xbecfed30->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1cc60d0 'a') -> $$ = nterm item (0xbecfed30 'a') 0x1cc60d0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfed30 } 0x1cc60d0->Object::Object { 0x1cc60b0, 0x1cc60c0, 0xbecfed30 } 0xbecfecfc->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfed30 } 0xbecfecfc->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfecfc, 0xbecfed30 } 0xbecfed30->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfed30 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbecfec94->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0 } 0xbecfed20->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfec94 } 0xbecfec94->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfec94, 0xbecfed20 } Next token is token 'a' (0xbecfed20 'a') 0xbecfecc0->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfed20 } 
0xbecfec68->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfecc0, 0xbecfed20 } 0xbecfec68->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfec68, 0xbecfecc0, 0xbecfed20 } 0xbecfed20->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfecc0, 0xbecfed20 } Shifting token 'a' (0xbecfecc0 'a') 0x1cc60e0->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfecc0 } 0xbecfec68->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfecc0 } 0xbecfec68->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfec68, 0xbecfecc0 } 0xbecfecc0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfecc0 } Entering state 2 Stack now 0 11 11 11 2 0xbecfed30->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1cc60e0 'a') -> $$ = nterm item (0xbecfed30 'a') 0x1cc60e0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfed30 } 0x1cc60e0->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfed30 } 0xbecfecfc->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfed30 } 0xbecfecfc->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfecfc, 0xbecfed30 } 0xbecfed30->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfed30 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbecfec94->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0 } 0xbecfed20->Object::Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfec94 } 0xbecfec94->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfec94, 0xbecfed20 } Next token is token 'p' (0xbecfed20 'p'Exception caught: cleaning lookahead and stack 0x1cc60e0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0x1cc60e0, 0xbecfed20 } 0x1cc60d0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0x1cc60d0, 0xbecfed20 } 0x1cc60c0->Object::~Object { 0x1cc60b0, 0x1cc60c0, 0xbecfed20 } 0x1cc60b0->Object::~Object { 0x1cc60b0, 0xbecfed20 } 
0xbecfed20->Object::~Object { 0xbecfed20 }
exception caught: printer
end { }
./c++.at:1362: grep '^exception caught: printer$' stderr
stdout:
exception caught: printer
./c++.at:1362: $PREPARSER ./input aaaae
stderr:
exception caught: syntax error
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input aaaaT
stderr:
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input aaaaR
stderr:
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1362: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from input.cc:148:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]',
    inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at input.cc:1707:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1
 1287 |           _M_realloc_insert(end(), __x);
      |
~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void yy::parser::stack::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at input.cc:1707:19, inlined from 'virtual int yy::parser::parse()' at input.cc:2033:15: /usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 1287 | _M_realloc_insert(end(), __x); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~ stdout: ./c++.at:1363: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaap stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbe831c94->Object::Object { } 0xbe831d20->Object::Object { 0xbe831c94 } 0xbe831c94->Object::~Object { 0xbe831c94, 0xbe831d20 } Next token is token 'a' (0xbe831d20 'a') 0xbe831cc0->Object::Object { 0xbe831d20 } 0xbe831c68->Object::Object { 0xbe831cc0, 0xbe831d20 } 0xbe831c68->Object::~Object { 0xbe831c68, 0xbe831cc0, 0xbe831d20 } 0xbe831d20->Object::~Object { 0xbe831cc0, 0xbe831d20 } Shifting token 'a' (0xbe831cc0 'a') 0x122a0b0->Object::Object { 0xbe831cc0 } 0xbe831c68->Object::Object { 0x122a0b0, 0xbe831cc0 } 0xbe831c68->Object::~Object { 0x122a0b0, 0xbe831c68, 0xbe831cc0 } 
0xbe831cc0->Object::~Object { 0x122a0b0, 0xbe831cc0 } Entering state 1 Stack now 0 1 0xbe831d30->Object::Object { 0x122a0b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x122a0b0 'a') -> $$ = nterm item (0xbe831d30 'a') 0x122a0b0->Object::~Object { 0x122a0b0, 0xbe831d30 } 0x122a0b0->Object::Object { 0xbe831d30 } 0xbe831cfc->Object::Object { 0x122a0b0, 0xbe831d30 } 0xbe831cfc->Object::~Object { 0x122a0b0, 0xbe831cfc, 0xbe831d30 } 0xbe831d30->Object::~Object { 0x122a0b0, 0xbe831d30 } Entering state 10 Stack now 0 10 Reading a token 0xbe831c94->Object::Object { 0x122a0b0 } 0xbe831d20->Object::Object { 0x122a0b0, 0xbe831c94 } 0xbe831c94->Object::~Object { 0x122a0b0, 0xbe831c94, 0xbe831d20 } Next token is token 'a' (0xbe831d20 'a') 0xbe831cc0->Object::Object { 0x122a0b0, 0xbe831d20 } 0xbe831c68->Object::Object { 0x122a0b0, 0xbe831cc0, 0xbe831d20 } 0xbe831c68->Object::~Object { 0x122a0b0, 0xbe831c68, 0xbe831cc0, 0xbe831d20 } 0xbe831d20->Object::~Object { 0x122a0b0, 0xbe831cc0, 0xbe831d20 } Shifting token 'a' (0xbe831cc0 'a') 0x122a0c0->Object::Object { 0x122a0b0, 0xbe831cc0 } 0xbe831c68->Object::Object { 0x122a0b0, 0x122a0c0, 0xbe831cc0 } 0xbe831c68->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831c68, 0xbe831cc0 } 0xbe831cc0->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831cc0 } Entering state 1 Stack now 0 10 1 0xbe831d30->Object::Object { 0x122a0b0, 0x122a0c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x122a0c0 'a') -> $$ = nterm item (0xbe831d30 'a') 0x122a0c0->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831d30 } 0x122a0c0->Object::Object { 0x122a0b0, 0xbe831d30 } 0xbe831cfc->Object::Object { 0x122a0b0, 0x122a0c0, 0xbe831d30 } 0xbe831cfc->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831cfc, 0xbe831d30 } 0xbe831d30->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831d30 } Entering state 10 Stack now 0 10 10 Reading a token 0xbe831c94->Object::Object { 0x122a0b0, 0x122a0c0 } 0xbe831d20->Object::Object { 0x122a0b0, 0x122a0c0, 0xbe831c94 } 
0xbe831c94->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831c94, 0xbe831d20 } Next token is token 'a' (0xbe831d20 'a') 0xbe831cc0->Object::Object { 0x122a0b0, 0x122a0c0, 0xbe831d20 } 0xbe831c68->Object::Object { 0x122a0b0, 0x122a0c0, 0xbe831cc0, 0xbe831d20 } 0xbe831c68->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831c68, 0xbe831cc0, 0xbe831d20 } 0xbe831d20->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831cc0, 0xbe831d20 } Shifting token 'a' (0xbe831cc0 'a') 0x122a0d0->Object::Object { 0x122a0b0, 0x122a0c0, 0xbe831cc0 } 0xbe831c68->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831cc0 } 0xbe831c68->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831c68, 0xbe831cc0 } 0xbe831cc0->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831cc0 } Entering state 1 Stack now 0 10 10 1 0xbe831d30->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x122a0d0 'a') -> $$ = nterm item (0xbe831d30 'a') 0x122a0d0->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831d30 } 0x122a0d0->Object::Object { 0x122a0b0, 0x122a0c0, 0xbe831d30 } 0xbe831cfc->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831d30 } 0xbe831cfc->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831cfc, 0xbe831d30 } 0xbe831d30->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831d30 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbe831c94->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0 } 0xbe831d20->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831c94 } 0xbe831c94->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831c94, 0xbe831d20 } Next token is token 'a' (0xbe831d20 'a') 0xbe831cc0->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831d20 } 0xbe831c68->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831cc0, 0xbe831d20 } 0xbe831c68->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831c68, 0xbe831cc0, 0xbe831d20 } 0xbe831d20->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 
0xbe831cc0, 0xbe831d20 } Shifting token 'a' (0xbe831cc0 'a') 0x122a0e0->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831cc0 } 0xbe831c68->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831cc0 } 0xbe831c68->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831c68, 0xbe831cc0 } 0xbe831cc0->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831cc0 } Entering state 1 Stack now 0 10 10 10 1 0xbe831d30->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x122a0e0 'a') -> $$ = nterm item (0xbe831d30 'a') 0x122a0e0->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831d30 } 0x122a0e0->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831d30 } 0xbe831cfc->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831d30 } 0xbe831cfc->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831cfc, 0xbe831d30 } 0xbe831d30->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831d30 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbe831c94->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0 } 0xbe831d20->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831c94 } 0xbe831c94->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831c94, 0xbe831d20 } Next token is token 'p' (0xbe831d20 'p'Exception caught: cleaning lookahead and stack 0x122a0e0->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831d20 } 0x122a0d0->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831d20 } 0x122a0c0->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831d20 } 0x122a0b0->Object::~Object { 0x122a0b0, 0xbe831d20 } 0xbe831d20->Object::~Object { 0xbe831d20 } exception caught: printer end { } ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbe831c94->Object::Object { } 
0xbe831d20->Object::Object { 0xbe831c94 } 0xbe831c94->Object::~Object { 0xbe831c94, 0xbe831d20 } Next token is token 'a' (0xbe831d20 'a') 0xbe831cc0->Object::Object { 0xbe831d20 } 0xbe831c68->Object::Object { 0xbe831cc0, 0xbe831d20 } 0xbe831c68->Object::~Object { 0xbe831c68, 0xbe831cc0, 0xbe831d20 } 0xbe831d20->Object::~Object { 0xbe831cc0, 0xbe831d20 } Shifting token 'a' (0xbe831cc0 'a') 0x122a0b0->Object::Object { 0xbe831cc0 } 0xbe831c68->Object::Object { 0x122a0b0, 0xbe831cc0 } 0xbe831c68->Object::~Object { 0x122a0b0, 0xbe831c68, 0xbe831cc0 } 0xbe831cc0->Object::~Object { 0x122a0b0, 0xbe831cc0 } Entering state 1 Stack now 0 1 0xbe831d30->Object::Object { 0x122a0b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x122a0b0 'a') -> $$ = nterm item (0xbe831d30 'a') 0x122a0b0->Object::~Object { 0x122a0b0, 0xbe831d30 } 0x122a0b0->Object::Object { 0xbe831d30 } 0xbe831cfc->Object::Object { 0x122a0b0, 0xbe831d30 } 0xbe831cfc->Object::~Object { 0x122a0b0, 0xbe831cfc, 0xbe831d30 } 0xbe831d30->Object::~Object { 0x122a0b0, 0xbe831d30 } Entering state 10 Stack now 0 10 Reading a token 0xbe831c94->Object::Object { 0x122a0b0 } 0xbe831d20->Object::Object { 0x122a0b0, 0xbe831c94 } 0xbe831c94->Object::~Object { 0x122a0b0, 0xbe831c94, 0xbe831d20 } Next token is token 'a' (0xbe831d20 'a') 0xbe831cc0->Object::Object { 0x122a0b0, 0xbe831d20 } 0xbe831c68->Object::Object { 0x122a0b0, 0xbe831cc0, 0xbe831d20 } 0xbe831c68->Object::~Object { 0x122a0b0, 0xbe831c68, 0xbe831cc0, 0xbe831d20 } 0xbe831d20->Object::~Object { 0x122a0b0, 0xbe831cc0, 0xbe831d20 } Shifting token 'a' (0xbe831cc0 'a') 0x122a0c0->Object::Object { 0x122a0b0, 0xbe831cc0 } 0xbe831c68->Object::Object { 0x122a0b0, 0x122a0c0, 0xbe831cc0 } 0xbe831c68->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831c68, 0xbe831cc0 } 0xbe831cc0->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831cc0 } Entering state 1 Stack now 0 10 1 0xbe831d30->Object::Object { 0x122a0b0, 0x122a0c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' 
(0x122a0c0 'a') -> $$ = nterm item (0xbe831d30 'a') 0x122a0c0->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831d30 } 0x122a0c0->Object::Object { 0x122a0b0, 0xbe831d30 } 0xbe831cfc->Object::Object { 0x122a0b0, 0x122a0c0, 0xbe831d30 } 0xbe831cfc->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831cfc, 0xbe831d30 } 0xbe831d30->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831d30 } Entering state 10 Stack now 0 10 10 Reading a token 0xbe831c94->Object::Object { 0x122a0b0, 0x122a0c0 } 0xbe831d20->Object::Object { 0x122a0b0, 0x122a0c0, 0xbe831c94 } 0xbe831c94->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831c94, 0xbe831d20 } Next token is token 'a' (0xbe831d20 'a') 0xbe831cc0->Object::Object { 0x122a0b0, 0x122a0c0, 0xbe831d20 } 0xbe831c68->Object::Object { 0x122a0b0, 0x122a0c0, 0xbe831cc0, 0xbe831d20 } 0xbe831c68->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831c68, 0xbe831cc0, 0xbe831d20 } 0xbe831d20->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831cc0, 0xbe831d20 } Shifting token 'a' (0xbe831cc0 'a') 0x122a0d0->Object::Object { 0x122a0b0, 0x122a0c0, 0xbe831cc0 } 0xbe831c68->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831cc0 } 0xbe831c68->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831c68, 0xbe831cc0 } 0xbe831cc0->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831cc0 } Entering state 1 Stack now 0 10 10 1 0xbe831d30->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x122a0d0 'a') -> $$ = nterm item (0xbe831d30 'a') 0x122a0d0->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831d30 } 0x122a0d0->Object::Object { 0x122a0b0, 0x122a0c0, 0xbe831d30 } 0xbe831cfc->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831d30 } 0xbe831cfc->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831cfc, 0xbe831d30 } 0xbe831d30->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831d30 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbe831c94->Object::Object { 0x122a0b0, 0x122a0c0, 
0x122a0d0 } 0xbe831d20->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831c94 } 0xbe831c94->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831c94, 0xbe831d20 } Next token is token 'a' (0xbe831d20 'a') 0xbe831cc0->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831d20 } 0xbe831c68->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831cc0, 0xbe831d20 } 0xbe831c68->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831c68, 0xbe831cc0, 0xbe831d20 } 0xbe831d20->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831cc0, 0xbe831d20 } Shifting token 'a' (0xbe831cc0 'a') 0x122a0e0->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831cc0 } 0xbe831c68->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831cc0 } 0xbe831c68->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831c68, 0xbe831cc0 } 0xbe831cc0->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831cc0 } Entering state 1 Stack now 0 10 10 10 1 0xbe831d30->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x122a0e0 'a') -> $$ = nterm item (0xbe831d30 'a') 0x122a0e0->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831d30 } 0x122a0e0->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831d30 } 0xbe831cfc->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831d30 } 0xbe831cfc->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831cfc, 0xbe831d30 } 0xbe831d30->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831d30 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbe831c94->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0 } 0xbe831d20->Object::Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831c94 } 0xbe831c94->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831c94, 0xbe831d20 } Next token is token 'p' (0xbe831d20 'p'Exception caught: cleaning lookahead and 
stack 0x122a0e0->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0x122a0e0, 0xbe831d20 } 0x122a0d0->Object::~Object { 0x122a0b0, 0x122a0c0, 0x122a0d0, 0xbe831d20 } 0x122a0c0->Object::~Object { 0x122a0b0, 0x122a0c0, 0xbe831d20 } 0x122a0b0->Object::~Object { 0x122a0b0, 0xbe831d20 } 0xbe831d20->Object::~Object { 0xbe831d20 } exception caught: printer end { }
./c++.at:1363: grep '^exception caught: printer$' stderr
stdout:
exception caught: printer
./c++.at:1363: $PREPARSER ./input aaaae
stderr:
exception caught: syntax error
./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1363: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1363: $PREPARSER ./input aaaaT
stderr:
./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1363: $PREPARSER ./input aaaaR
stderr:
./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1363: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1361: $PREPARSER ./input aaaas
stderr:
exception caught: reduction
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaal
stderr:
exception caught: yylex
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input i
stderr:
exception caught: initial-action
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaap
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input --debug aaaap
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token 0x1af96e0->Object::Object { } Next token is token 'a' (0x1af96e0 'a') Shifting
token 'a' (0x1af96e0 'a') Entering state 1 Stack now 0 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1af96e0 'a') -> $$ = nterm item (0x1af96e0 'a') Entering state 10 Stack now 0 10 Reading a token 0x1af9708->Object::Object { 0x1af96e0 } Next token is token 'a' (0x1af9708 'a') Shifting token 'a' (0x1af9708 'a') Entering state 1 Stack now 0 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1af9708 'a') -> $$ = nterm item (0x1af9708 'a') Entering state 10 Stack now 0 10 10 Reading a token 0x1af9730->Object::Object { 0x1af96e0, 0x1af9708 } Next token is token 'a' (0x1af9730 'a') Shifting token 'a' (0x1af9730 'a') Entering state 1 Stack now 0 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1af9730 'a') -> $$ = nterm item (0x1af9730 'a') Entering state 10 Stack now 0 10 10 10 Reading a token 0x1af9758->Object::Object { 0x1af96e0, 0x1af9708, 0x1af9730 } Next token is token 'a' (0x1af9758 'a') Shifting token 'a' (0x1af9758 'a') Entering state 1 Stack now 0 10 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1af9758 'a') -> $$ = nterm item (0x1af9758 'a') Entering state 10 Stack now 0 10 10 10 10 Reading a token 0x1af9780->Object::Object { 0x1af96e0, 0x1af9708, 0x1af9730, 0x1af9758 } Next token is token 'p' (0x1af9780 'p'Exception caught: cleaning lookahead and stack 0x1af9780->Object::~Object { 0x1af96e0, 0x1af9708, 0x1af9730, 0x1af9758, 0x1af9780 } 0x1af9758->Object::~Object { 0x1af96e0, 0x1af9708, 0x1af9730, 0x1af9758 } 0x1af9730->Object::~Object { 0x1af96e0, 0x1af9708, 0x1af9730 } 0x1af9708->Object::~Object { 0x1af96e0, 0x1af9708 } 0x1af96e0->Object::~Object { 0x1af96e0 } exception caught: printer end { } ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0x1af96e0->Object::Object { } Next token is token 'a' (0x1af96e0 'a') Shifting token 'a' (0x1af96e0 'a') Entering state 1 Stack now 0 1 Reducing stack by rule 4 (line 
147): $1 = token 'a' (0x1af96e0 'a') -> $$ = nterm item (0x1af96e0 'a') Entering state 10 Stack now 0 10 Reading a token 0x1af9708->Object::Object { 0x1af96e0 } Next token is token 'a' (0x1af9708 'a') Shifting token 'a' (0x1af9708 'a') Entering state 1 Stack now 0 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1af9708 'a') -> $$ = nterm item (0x1af9708 'a') Entering state 10 Stack now 0 10 10 Reading a token 0x1af9730->Object::Object { 0x1af96e0, 0x1af9708 } Next token is token 'a' (0x1af9730 'a') Shifting token 'a' (0x1af9730 'a') Entering state 1 Stack now 0 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1af9730 'a') -> $$ = nterm item (0x1af9730 'a') Entering state 10 Stack now 0 10 10 10 Reading a token 0x1af9758->Object::Object { 0x1af96e0, 0x1af9708, 0x1af9730 } Next token is token 'a' (0x1af9758 'a') Shifting token 'a' (0x1af9758 'a') Entering state 1 Stack now 0 10 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1af9758 'a') -> $$ = nterm item (0x1af9758 'a') Entering state 10 Stack now 0 10 10 10 10 Reading a token 0x1af9780->Object::Object { 0x1af96e0, 0x1af9708, 0x1af9730, 0x1af9758 } Next token is token 'p' (0x1af9780 'p'Exception caught: cleaning lookahead and stack 0x1af9780->Object::~Object { 0x1af96e0, 0x1af9708, 0x1af9730, 0x1af9758, 0x1af9780 } 0x1af9758->Object::~Object { 0x1af96e0, 0x1af9708, 0x1af9730, 0x1af9758 } 0x1af9730->Object::~Object { 0x1af96e0, 0x1af9708, 0x1af9730 } 0x1af9708->Object::~Object { 0x1af96e0, 0x1af9708 } 0x1af96e0->Object::~Object { 0x1af96e0 } exception caught: printer end { } ./c++.at:1361: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1361: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1361: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1361: sed >&2 -e 
'/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaT
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaR
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1361: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from input.cc:148:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void yy::parser::stack<T, S>::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at input.cc:1094:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at input.cc:1707:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
 1287 |       _M_realloc_insert(end(), __x);
      |       ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void yy::parser::stack<T, S>::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at input.cc:1094:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at input.cc:1707:19,
    inlined from 'virtual int yy::parser::parse()' at input.cc:2039:15:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
 1287 |       _M_realloc_insert(end(), __x);
      |       ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:1362: $PREPARSER ./input aaaas
stderr:
exception caught: reduction
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input aaaal
stderr:
exception caught: yylex
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input i
stderr:
exception caught: initial-action
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input aaaap
stderr:
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input --debug aaaap
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token 0xbeb3cc94->Object::Object { } 0xbeb3cd20->Object::Object { 0xbeb3cc94 } 0xbeb3cc94->Object::~Object { 0xbeb3cc94, 0xbeb3cd20 } Next token is token 'a' (0xbeb3cd20 'a') 0xbeb3ccc0->Object::Object { 0xbeb3cd20 } 0xbeb3cc68->Object::Object { 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cc68->Object::~Object { 0xbeb3cc68, 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cd20->Object::~Object { 0xbeb3ccc0, 0xbeb3cd20 } Shifting token 'a' (0xbeb3ccc0 'a') 0x1ce80b0->Object::Object { 0xbeb3ccc0 } 0xbeb3cc68->Object::Object { 0x1ce80b0, 0xbeb3ccc0 } 0xbeb3cc68->Object::~Object { 0x1ce80b0, 0xbeb3cc68, 0xbeb3ccc0 } 0xbeb3ccc0->Object::~Object { 0x1ce80b0, 0xbeb3ccc0 } Entering state 2 Stack now 0 2 0xbeb3cd30->Object::Object { 0x1ce80b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1ce80b0 'a') -> $$ = nterm item (0xbeb3cd30 'a') 0x1ce80b0->Object::~Object { 0x1ce80b0, 0xbeb3cd30 } 0x1ce80b0->Object::Object { 0xbeb3cd30 } 0xbeb3ccfc->Object::Object { 0x1ce80b0, 0xbeb3cd30 } 0xbeb3ccfc->Object::~Object { 0x1ce80b0, 0xbeb3ccfc, 0xbeb3cd30 } 0xbeb3cd30->Object::~Object {
0x1ce80b0, 0xbeb3cd30 } Entering state 11 Stack now 0 11 Reading a token 0xbeb3cc94->Object::Object { 0x1ce80b0 } 0xbeb3cd20->Object::Object { 0x1ce80b0, 0xbeb3cc94 } 0xbeb3cc94->Object::~Object { 0x1ce80b0, 0xbeb3cc94, 0xbeb3cd20 } Next token is token 'a' (0xbeb3cd20 'a') 0xbeb3ccc0->Object::Object { 0x1ce80b0, 0xbeb3cd20 } 0xbeb3cc68->Object::Object { 0x1ce80b0, 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cc68->Object::~Object { 0x1ce80b0, 0xbeb3cc68, 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cd20->Object::~Object { 0x1ce80b0, 0xbeb3ccc0, 0xbeb3cd20 } Shifting token 'a' (0xbeb3ccc0 'a') 0x1ce80c0->Object::Object { 0x1ce80b0, 0xbeb3ccc0 } 0xbeb3cc68->Object::Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3ccc0 } 0xbeb3cc68->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cc68, 0xbeb3ccc0 } 0xbeb3ccc0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3ccc0 } Entering state 2 Stack now 0 11 2 0xbeb3cd30->Object::Object { 0x1ce80b0, 0x1ce80c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1ce80c0 'a') -> $$ = nterm item (0xbeb3cd30 'a') 0x1ce80c0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cd30 } 0x1ce80c0->Object::Object { 0x1ce80b0, 0xbeb3cd30 } 0xbeb3ccfc->Object::Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cd30 } 0xbeb3ccfc->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3ccfc, 0xbeb3cd30 } 0xbeb3cd30->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cd30 } Entering state 11 Stack now 0 11 11 Reading a token 0xbeb3cc94->Object::Object { 0x1ce80b0, 0x1ce80c0 } 0xbeb3cd20->Object::Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cc94 } 0xbeb3cc94->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cc94, 0xbeb3cd20 } Next token is token 'a' (0xbeb3cd20 'a') 0xbeb3ccc0->Object::Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cd20 } 0xbeb3cc68->Object::Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cc68->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cc68, 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cd20->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3ccc0, 0xbeb3cd20 } Shifting token 'a' (0xbeb3ccc0 'a') 0x1ce80d0->Object::Object { 
0x1ce80b0, 0x1ce80c0, 0xbeb3ccc0 } 0xbeb3cc68->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3ccc0 } 0xbeb3cc68->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cc68, 0xbeb3ccc0 } 0xbeb3ccc0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3ccc0 } Entering state 2 Stack now 0 11 11 2 0xbeb3cd30->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1ce80d0 'a') -> $$ = nterm item (0xbeb3cd30 'a') 0x1ce80d0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cd30 } 0x1ce80d0->Object::Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cd30 } 0xbeb3ccfc->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cd30 } 0xbeb3ccfc->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3ccfc, 0xbeb3cd30 } 0xbeb3cd30->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cd30 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbeb3cc94->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0 } 0xbeb3cd20->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cc94 } 0xbeb3cc94->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cc94, 0xbeb3cd20 } Next token is token 'a' (0xbeb3cd20 'a') 0xbeb3ccc0->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cd20 } 0xbeb3cc68->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cc68->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cc68, 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cd20->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3ccc0, 0xbeb3cd20 } Shifting token 'a' (0xbeb3ccc0 'a') 0x1ce80e0->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3ccc0 } 0xbeb3cc68->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3ccc0 } 0xbeb3cc68->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3cc68, 0xbeb3ccc0 } 0xbeb3ccc0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3ccc0 } Entering state 2 Stack now 0 11 11 11 2 0xbeb3cd30->Object::Object { 0x1ce80b0, 0x1ce80c0, 
0x1ce80d0, 0x1ce80e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1ce80e0 'a') -> $$ = nterm item (0xbeb3cd30 'a') 0x1ce80e0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3cd30 } 0x1ce80e0->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cd30 } 0xbeb3ccfc->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3cd30 } 0xbeb3ccfc->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3ccfc, 0xbeb3cd30 } 0xbeb3cd30->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3cd30 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbeb3cc94->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0 } 0xbeb3cd20->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3cc94 } 0xbeb3cc94->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3cc94, 0xbeb3cd20 } Next token is token 'p' (0xbeb3cd20 'p'Exception caught: cleaning lookahead and stack 0x1ce80e0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3cd20 } 0x1ce80d0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cd20 } 0x1ce80c0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cd20 } 0x1ce80b0->Object::~Object { 0x1ce80b0, 0xbeb3cd20 } 0xbeb3cd20->Object::~Object { 0xbeb3cd20 } exception caught: printer end { } ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbeb3cc94->Object::Object { } 0xbeb3cd20->Object::Object { 0xbeb3cc94 } 0xbeb3cc94->Object::~Object { 0xbeb3cc94, 0xbeb3cd20 } Next token is token 'a' (0xbeb3cd20 'a') 0xbeb3ccc0->Object::Object { 0xbeb3cd20 } 0xbeb3cc68->Object::Object { 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cc68->Object::~Object { 0xbeb3cc68, 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cd20->Object::~Object { 0xbeb3ccc0, 0xbeb3cd20 } Shifting token 'a' (0xbeb3ccc0 'a') 0x1ce80b0->Object::Object { 0xbeb3ccc0 } 0xbeb3cc68->Object::Object { 0x1ce80b0, 0xbeb3ccc0 } 
0xbeb3cc68->Object::~Object { 0x1ce80b0, 0xbeb3cc68, 0xbeb3ccc0 } 0xbeb3ccc0->Object::~Object { 0x1ce80b0, 0xbeb3ccc0 } Entering state 2 Stack now 0 2 0xbeb3cd30->Object::Object { 0x1ce80b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1ce80b0 'a') -> $$ = nterm item (0xbeb3cd30 'a') 0x1ce80b0->Object::~Object { 0x1ce80b0, 0xbeb3cd30 } 0x1ce80b0->Object::Object { 0xbeb3cd30 } 0xbeb3ccfc->Object::Object { 0x1ce80b0, 0xbeb3cd30 } 0xbeb3ccfc->Object::~Object { 0x1ce80b0, 0xbeb3ccfc, 0xbeb3cd30 } 0xbeb3cd30->Object::~Object { 0x1ce80b0, 0xbeb3cd30 } Entering state 11 Stack now 0 11 Reading a token 0xbeb3cc94->Object::Object { 0x1ce80b0 } 0xbeb3cd20->Object::Object { 0x1ce80b0, 0xbeb3cc94 } 0xbeb3cc94->Object::~Object { 0x1ce80b0, 0xbeb3cc94, 0xbeb3cd20 } Next token is token 'a' (0xbeb3cd20 'a') 0xbeb3ccc0->Object::Object { 0x1ce80b0, 0xbeb3cd20 } 0xbeb3cc68->Object::Object { 0x1ce80b0, 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cc68->Object::~Object { 0x1ce80b0, 0xbeb3cc68, 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cd20->Object::~Object { 0x1ce80b0, 0xbeb3ccc0, 0xbeb3cd20 } Shifting token 'a' (0xbeb3ccc0 'a') 0x1ce80c0->Object::Object { 0x1ce80b0, 0xbeb3ccc0 } 0xbeb3cc68->Object::Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3ccc0 } 0xbeb3cc68->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cc68, 0xbeb3ccc0 } 0xbeb3ccc0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3ccc0 } Entering state 2 Stack now 0 11 2 0xbeb3cd30->Object::Object { 0x1ce80b0, 0x1ce80c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1ce80c0 'a') -> $$ = nterm item (0xbeb3cd30 'a') 0x1ce80c0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cd30 } 0x1ce80c0->Object::Object { 0x1ce80b0, 0xbeb3cd30 } 0xbeb3ccfc->Object::Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cd30 } 0xbeb3ccfc->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3ccfc, 0xbeb3cd30 } 0xbeb3cd30->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cd30 } Entering state 11 Stack now 0 11 11 Reading a token 0xbeb3cc94->Object::Object { 0x1ce80b0, 0x1ce80c0 } 
0xbeb3cd20->Object::Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cc94 } 0xbeb3cc94->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cc94, 0xbeb3cd20 } Next token is token 'a' (0xbeb3cd20 'a') 0xbeb3ccc0->Object::Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cd20 } 0xbeb3cc68->Object::Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cc68->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cc68, 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cd20->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3ccc0, 0xbeb3cd20 } Shifting token 'a' (0xbeb3ccc0 'a') 0x1ce80d0->Object::Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3ccc0 } 0xbeb3cc68->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3ccc0 } 0xbeb3cc68->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cc68, 0xbeb3ccc0 } 0xbeb3ccc0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3ccc0 } Entering state 2 Stack now 0 11 11 2 0xbeb3cd30->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1ce80d0 'a') -> $$ = nterm item (0xbeb3cd30 'a') 0x1ce80d0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cd30 } 0x1ce80d0->Object::Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cd30 } 0xbeb3ccfc->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cd30 } 0xbeb3ccfc->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3ccfc, 0xbeb3cd30 } 0xbeb3cd30->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cd30 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbeb3cc94->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0 } 0xbeb3cd20->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cc94 } 0xbeb3cc94->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cc94, 0xbeb3cd20 } Next token is token 'a' (0xbeb3cd20 'a') 0xbeb3ccc0->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cd20 } 0xbeb3cc68->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3ccc0, 0xbeb3cd20 } 0xbeb3cc68->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cc68, 0xbeb3ccc0, 0xbeb3cd20 } 
0xbeb3cd20->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3ccc0, 0xbeb3cd20 } Shifting token 'a' (0xbeb3ccc0 'a') 0x1ce80e0->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3ccc0 } 0xbeb3cc68->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3ccc0 } 0xbeb3cc68->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3cc68, 0xbeb3ccc0 } 0xbeb3ccc0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3ccc0 } Entering state 2 Stack now 0 11 11 11 2 0xbeb3cd30->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1ce80e0 'a') -> $$ = nterm item (0xbeb3cd30 'a') 0x1ce80e0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3cd30 } 0x1ce80e0->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cd30 } 0xbeb3ccfc->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3cd30 } 0xbeb3ccfc->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3ccfc, 0xbeb3cd30 } 0xbeb3cd30->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3cd30 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbeb3cc94->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0 } 0xbeb3cd20->Object::Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3cc94 } 0xbeb3cc94->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3cc94, 0xbeb3cd20 } Next token is token 'p' (0xbeb3cd20 'p'Exception caught: cleaning lookahead and stack 0x1ce80e0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0x1ce80e0, 0xbeb3cd20 } 0x1ce80d0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0x1ce80d0, 0xbeb3cd20 } 0x1ce80c0->Object::~Object { 0x1ce80b0, 0x1ce80c0, 0xbeb3cd20 } 0x1ce80b0->Object::~Object { 0x1ce80b0, 0xbeb3cd20 } 0xbeb3cd20->Object::~Object { 0xbeb3cd20 } exception caught: printer end { } ./c++.at:1362: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1362: $PREPARSER 
./input aaaae
stderr:
exception caught: syntax error
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input aaaaT
stderr:
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1362: $PREPARSER ./input aaaaR
stderr:
./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1362: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./c++.at:1361: $PREPARSER ./input aaaas
stderr:
exception caught: reduction
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaal
stderr:
exception caught: yylex
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input i
stderr:
exception caught: initial-action
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaap
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input --debug aaaap
stderr:
Starting parse Entering state 0 Stack now 0 Reading a token 0x23e26e0->Object::Object { } Next token is token 'a' (0x23e26e0 'a') Shifting token 'a' (0x23e26e0 'a') Entering state 1 Stack now 0 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x23e26e0 'a') -> $$ = nterm item (0x23e26e0 'a') Entering state 10 Stack now 0 10 Reading a token 0x23e2708->Object::Object { 0x23e26e0 } Next token is token 'a' (0x23e2708 'a') Shifting token 'a' (0x23e2708 'a') Entering state 1 Stack now 0 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x23e2708 'a') -> $$ = nterm item (0x23e2708 'a') Entering state 10
Stack now 0 10 10 Reading a token 0x23e2730->Object::Object { 0x23e26e0, 0x23e2708 } Next token is token 'a' (0x23e2730 'a') Shifting token 'a' (0x23e2730 'a') Entering state 1 Stack now 0 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x23e2730 'a') -> $$ = nterm item (0x23e2730 'a') Entering state 10 Stack now 0 10 10 10 Reading a token 0x23e2758->Object::Object { 0x23e26e0, 0x23e2708, 0x23e2730 } Next token is token 'a' (0x23e2758 'a') Shifting token 'a' (0x23e2758 'a') Entering state 1 Stack now 0 10 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x23e2758 'a') -> $$ = nterm item (0x23e2758 'a') Entering state 10 Stack now 0 10 10 10 10 Reading a token 0x23e2780->Object::Object { 0x23e26e0, 0x23e2708, 0x23e2730, 0x23e2758 } Next token is token 'p' (0x23e2780 'p'Exception caught: cleaning lookahead and stack 0x23e2780->Object::~Object { 0x23e26e0, 0x23e2708, 0x23e2730, 0x23e2758, 0x23e2780 } 0x23e2758->Object::~Object { 0x23e26e0, 0x23e2708, 0x23e2730, 0x23e2758 } 0x23e2730->Object::~Object { 0x23e26e0, 0x23e2708, 0x23e2730 } 0x23e2708->Object::~Object { 0x23e26e0, 0x23e2708 } 0x23e26e0->Object::~Object { 0x23e26e0 } exception caught: printer end { } ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0x23e26e0->Object::Object { } Next token is token 'a' (0x23e26e0 'a') Shifting token 'a' (0x23e26e0 'a') Entering state 1 Stack now 0 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x23e26e0 'a') -> $$ = nterm item (0x23e26e0 'a') Entering state 10 Stack now 0 10 Reading a token 0x23e2708->Object::Object { 0x23e26e0 } Next token is token 'a' (0x23e2708 'a') Shifting token 'a' (0x23e2708 'a') Entering state 1 Stack now 0 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x23e2708 'a') -> $$ = nterm item (0x23e2708 'a') Entering state 10 Stack now 0 10 10 Reading a token 0x23e2730->Object::Object { 0x23e26e0, 0x23e2708 } Next 
token is token 'a' (0x23e2730 'a') Shifting token 'a' (0x23e2730 'a') Entering state 1 Stack now 0 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x23e2730 'a') -> $$ = nterm item (0x23e2730 'a') Entering state 10 Stack now 0 10 10 10 Reading a token 0x23e2758->Object::Object { 0x23e26e0, 0x23e2708, 0x23e2730 } Next token is token 'a' (0x23e2758 'a') Shifting token 'a' (0x23e2758 'a') Entering state 1 Stack now 0 10 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x23e2758 'a') -> $$ = nterm item (0x23e2758 'a') Entering state 10 Stack now 0 10 10 10 10 Reading a token 0x23e2780->Object::Object { 0x23e26e0, 0x23e2708, 0x23e2730, 0x23e2758 } Next token is token 'p' (0x23e2780 'p'Exception caught: cleaning lookahead and stack 0x23e2780->Object::~Object { 0x23e26e0, 0x23e2708, 0x23e2730, 0x23e2758, 0x23e2780 } 0x23e2758->Object::~Object { 0x23e26e0, 0x23e2708, 0x23e2730, 0x23e2758 } 0x23e2730->Object::~Object { 0x23e26e0, 0x23e2708, 0x23e2730 } 0x23e2708->Object::~Object { 0x23e26e0, 0x23e2708 } 0x23e26e0->Object::~Object { 0x23e26e0 } exception caught: printer end { }
./c++.at:1361: grep '^exception caught: printer$' stderr
stdout:
exception caught: printer
./c++.at:1361: $PREPARSER ./input aaaae
stderr:
exception caught: syntax error
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaT
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaR
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1361: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from input.cc:148:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, const _Tp&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]':
/usr/include/c++/12/bits/vector.tcc:444:5: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1
  444 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In file included from /usr/include/c++/12/vector:64:
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void yy::parser::stack<T, S>::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at input.cc:1094:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at input.cc:1707:19:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
 1287 |       _M_realloc_insert(end(), __x);
      |       ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::push_back(const value_type&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]',
    inlined from 'void yy::parser::stack<T, S>::push(T&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at input.cc:1094:24,
    inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&)' at input.cc:1707:19,
    inlined from 'virtual int yy::parser::parse()' at input.cc:2033:15:
/usr/include/c++/12/bits/stl_vector.h:1287:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1
 1287 |       _M_realloc_insert(end(), __x);
      |       ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~
stdout:
./c++.at:1363: $PREPARSER ./input aaaas
stderr:
exception caught: reduction
./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1363: $PREPARSER ./input aaaal
stderr:
exception caught: yylex
./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for
summaries/d' stderr ./c++.at:1363: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaap stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbeabcc94->Object::Object { } 0xbeabcd20->Object::Object { 0xbeabcc94 } 0xbeabcc94->Object::~Object { 0xbeabcc94, 0xbeabcd20 } Next token is token 'a' (0xbeabcd20 'a') 0xbeabccc0->Object::Object { 0xbeabcd20 } 0xbeabcc68->Object::Object { 0xbeabccc0, 0xbeabcd20 } 0xbeabcc68->Object::~Object { 0xbeabcc68, 0xbeabccc0, 0xbeabcd20 } 0xbeabcd20->Object::~Object { 0xbeabccc0, 0xbeabcd20 } Shifting token 'a' (0xbeabccc0 'a') 0x153e0b0->Object::Object { 0xbeabccc0 } 0xbeabcc68->Object::Object { 0x153e0b0, 0xbeabccc0 } 0xbeabcc68->Object::~Object { 0x153e0b0, 0xbeabcc68, 0xbeabccc0 } 0xbeabccc0->Object::~Object { 0x153e0b0, 0xbeabccc0 } Entering state 1 Stack now 0 1 0xbeabcd30->Object::Object { 0x153e0b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x153e0b0 'a') -> $$ = nterm item (0xbeabcd30 'a') 0x153e0b0->Object::~Object { 0x153e0b0, 0xbeabcd30 } 0x153e0b0->Object::Object { 0xbeabcd30 } 0xbeabccfc->Object::Object { 0x153e0b0, 0xbeabcd30 } 0xbeabccfc->Object::~Object { 0x153e0b0, 0xbeabccfc, 0xbeabcd30 } 0xbeabcd30->Object::~Object { 0x153e0b0, 0xbeabcd30 } Entering state 10 Stack now 0 10 Reading a token 0xbeabcc94->Object::Object { 0x153e0b0 } 0xbeabcd20->Object::Object { 0x153e0b0, 0xbeabcc94 } 0xbeabcc94->Object::~Object { 0x153e0b0, 0xbeabcc94, 0xbeabcd20 } Next token is token 'a' (0xbeabcd20 'a') 0xbeabccc0->Object::Object { 0x153e0b0, 0xbeabcd20 } 0xbeabcc68->Object::Object { 0x153e0b0, 0xbeabccc0, 0xbeabcd20 } 0xbeabcc68->Object::~Object { 0x153e0b0, 0xbeabcc68, 0xbeabccc0, 0xbeabcd20 } 0xbeabcd20->Object::~Object { 0x153e0b0, 
0xbeabccc0, 0xbeabcd20 } Shifting token 'a' (0xbeabccc0 'a') 0x153e0c0->Object::Object { 0x153e0b0, 0xbeabccc0 } 0xbeabcc68->Object::Object { 0x153e0b0, 0x153e0c0, 0xbeabccc0 } 0xbeabcc68->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabcc68, 0xbeabccc0 } 0xbeabccc0->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabccc0 } Entering state 1 Stack now 0 10 1 0xbeabcd30->Object::Object { 0x153e0b0, 0x153e0c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x153e0c0 'a') -> $$ = nterm item (0xbeabcd30 'a') 0x153e0c0->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabcd30 } 0x153e0c0->Object::Object { 0x153e0b0, 0xbeabcd30 } 0xbeabccfc->Object::Object { 0x153e0b0, 0x153e0c0, 0xbeabcd30 } 0xbeabccfc->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabccfc, 0xbeabcd30 } 0xbeabcd30->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabcd30 } Entering state 10 Stack now 0 10 10 Reading a token 0xbeabcc94->Object::Object { 0x153e0b0, 0x153e0c0 } 0xbeabcd20->Object::Object { 0x153e0b0, 0x153e0c0, 0xbeabcc94 } 0xbeabcc94->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabcc94, 0xbeabcd20 } Next token is token 'a' (0xbeabcd20 'a') 0xbeabccc0->Object::Object { 0x153e0b0, 0x153e0c0, 0xbeabcd20 } 0xbeabcc68->Object::Object { 0x153e0b0, 0x153e0c0, 0xbeabccc0, 0xbeabcd20 } 0xbeabcc68->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabcc68, 0xbeabccc0, 0xbeabcd20 } 0xbeabcd20->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabccc0, 0xbeabcd20 } Shifting token 'a' (0xbeabccc0 'a') 0x153e0d0->Object::Object { 0x153e0b0, 0x153e0c0, 0xbeabccc0 } 0xbeabcc68->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabccc0 } 0xbeabcc68->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcc68, 0xbeabccc0 } 0xbeabccc0->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabccc0 } Entering state 1 Stack now 0 10 10 1 0xbeabcd30->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x153e0d0 'a') -> $$ = nterm item (0xbeabcd30 'a') 0x153e0d0->Object::~Object { 
0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcd30 } 0x153e0d0->Object::Object { 0x153e0b0, 0x153e0c0, 0xbeabcd30 } 0xbeabccfc->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcd30 } 0xbeabccfc->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabccfc, 0xbeabcd30 } 0xbeabcd30->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcd30 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbeabcc94->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0 } 0xbeabcd20->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcc94 } 0xbeabcc94->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcc94, 0xbeabcd20 } Next token is token 'a' (0xbeabcd20 'a') 0xbeabccc0->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcd20 } 0xbeabcc68->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabccc0, 0xbeabcd20 } 0xbeabcc68->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcc68, 0xbeabccc0, 0xbeabcd20 } 0xbeabcd20->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabccc0, 0xbeabcd20 } Shifting token 'a' (0xbeabccc0 'a') 0x153e0e0->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabccc0 } 0xbeabcc68->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabccc0 } 0xbeabcc68->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabcc68, 0xbeabccc0 } 0xbeabccc0->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabccc0 } Entering state 1 Stack now 0 10 10 10 1 0xbeabcd30->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x153e0e0 'a') -> $$ = nterm item (0xbeabcd30 'a') 0x153e0e0->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabcd30 } 0x153e0e0->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcd30 } 0xbeabccfc->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabcd30 } 0xbeabccfc->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabccfc, 0xbeabcd30 } 0xbeabcd30->Object::~Object { 
0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabcd30 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbeabcc94->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0 } 0xbeabcd20->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabcc94 } 0xbeabcc94->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabcc94, 0xbeabcd20 } Next token is token 'p' (0xbeabcd20 'p'Exception caught: cleaning lookahead and stack 0x153e0e0->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabcd20 } 0x153e0d0->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcd20 } 0x153e0c0->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabcd20 } 0x153e0b0->Object::~Object { 0x153e0b0, 0xbeabcd20 } 0xbeabcd20->Object::~Object { 0xbeabcd20 } exception caught: printer end { } ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbeabcc94->Object::Object { } 0xbeabcd20->Object::Object { 0xbeabcc94 } 0xbeabcc94->Object::~Object { 0xbeabcc94, 0xbeabcd20 } Next token is token 'a' (0xbeabcd20 'a') 0xbeabccc0->Object::Object { 0xbeabcd20 } 0xbeabcc68->Object::Object { 0xbeabccc0, 0xbeabcd20 } 0xbeabcc68->Object::~Object { 0xbeabcc68, 0xbeabccc0, 0xbeabcd20 } 0xbeabcd20->Object::~Object { 0xbeabccc0, 0xbeabcd20 } Shifting token 'a' (0xbeabccc0 'a') 0x153e0b0->Object::Object { 0xbeabccc0 } 0xbeabcc68->Object::Object { 0x153e0b0, 0xbeabccc0 } 0xbeabcc68->Object::~Object { 0x153e0b0, 0xbeabcc68, 0xbeabccc0 } 0xbeabccc0->Object::~Object { 0x153e0b0, 0xbeabccc0 } Entering state 1 Stack now 0 1 0xbeabcd30->Object::Object { 0x153e0b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x153e0b0 'a') -> $$ = nterm item (0xbeabcd30 'a') 0x153e0b0->Object::~Object { 0x153e0b0, 0xbeabcd30 } 0x153e0b0->Object::Object { 0xbeabcd30 } 0xbeabccfc->Object::Object { 0x153e0b0, 0xbeabcd30 } 0xbeabccfc->Object::~Object { 0x153e0b0, 0xbeabccfc, 0xbeabcd30 } 
0xbeabcd30->Object::~Object { 0x153e0b0, 0xbeabcd30 } Entering state 10 Stack now 0 10 Reading a token 0xbeabcc94->Object::Object { 0x153e0b0 } 0xbeabcd20->Object::Object { 0x153e0b0, 0xbeabcc94 } 0xbeabcc94->Object::~Object { 0x153e0b0, 0xbeabcc94, 0xbeabcd20 } Next token is token 'a' (0xbeabcd20 'a') 0xbeabccc0->Object::Object { 0x153e0b0, 0xbeabcd20 } 0xbeabcc68->Object::Object { 0x153e0b0, 0xbeabccc0, 0xbeabcd20 } 0xbeabcc68->Object::~Object { 0x153e0b0, 0xbeabcc68, 0xbeabccc0, 0xbeabcd20 } 0xbeabcd20->Object::~Object { 0x153e0b0, 0xbeabccc0, 0xbeabcd20 } Shifting token 'a' (0xbeabccc0 'a') 0x153e0c0->Object::Object { 0x153e0b0, 0xbeabccc0 } 0xbeabcc68->Object::Object { 0x153e0b0, 0x153e0c0, 0xbeabccc0 } 0xbeabcc68->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabcc68, 0xbeabccc0 } 0xbeabccc0->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabccc0 } Entering state 1 Stack now 0 10 1 0xbeabcd30->Object::Object { 0x153e0b0, 0x153e0c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x153e0c0 'a') -> $$ = nterm item (0xbeabcd30 'a') 0x153e0c0->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabcd30 } 0x153e0c0->Object::Object { 0x153e0b0, 0xbeabcd30 } 0xbeabccfc->Object::Object { 0x153e0b0, 0x153e0c0, 0xbeabcd30 } 0xbeabccfc->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabccfc, 0xbeabcd30 } 0xbeabcd30->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabcd30 } Entering state 10 Stack now 0 10 10 Reading a token 0xbeabcc94->Object::Object { 0x153e0b0, 0x153e0c0 } 0xbeabcd20->Object::Object { 0x153e0b0, 0x153e0c0, 0xbeabcc94 } 0xbeabcc94->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabcc94, 0xbeabcd20 } Next token is token 'a' (0xbeabcd20 'a') 0xbeabccc0->Object::Object { 0x153e0b0, 0x153e0c0, 0xbeabcd20 } 0xbeabcc68->Object::Object { 0x153e0b0, 0x153e0c0, 0xbeabccc0, 0xbeabcd20 } 0xbeabcc68->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabcc68, 0xbeabccc0, 0xbeabcd20 } 0xbeabcd20->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabccc0, 0xbeabcd20 } Shifting token 'a' (0xbeabccc0 'a') 
0x153e0d0->Object::Object { 0x153e0b0, 0x153e0c0, 0xbeabccc0 } 0xbeabcc68->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabccc0 } 0xbeabcc68->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcc68, 0xbeabccc0 } 0xbeabccc0->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabccc0 } Entering state 1 Stack now 0 10 10 1 0xbeabcd30->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x153e0d0 'a') -> $$ = nterm item (0xbeabcd30 'a') 0x153e0d0->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcd30 } 0x153e0d0->Object::Object { 0x153e0b0, 0x153e0c0, 0xbeabcd30 } 0xbeabccfc->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcd30 } 0xbeabccfc->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabccfc, 0xbeabcd30 } 0xbeabcd30->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcd30 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbeabcc94->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0 } 0xbeabcd20->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcc94 } 0xbeabcc94->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcc94, 0xbeabcd20 } Next token is token 'a' (0xbeabcd20 'a') 0xbeabccc0->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcd20 } 0xbeabcc68->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabccc0, 0xbeabcd20 } 0xbeabcc68->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcc68, 0xbeabccc0, 0xbeabcd20 } 0xbeabcd20->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabccc0, 0xbeabcd20 } Shifting token 'a' (0xbeabccc0 'a') 0x153e0e0->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabccc0 } 0xbeabcc68->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabccc0 } 0xbeabcc68->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabcc68, 0xbeabccc0 } 0xbeabccc0->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabccc0 } Entering state 1 Stack now 0 10 10 10 1 
0xbeabcd30->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x153e0e0 'a') -> $$ = nterm item (0xbeabcd30 'a') 0x153e0e0->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabcd30 } 0x153e0e0->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcd30 } 0xbeabccfc->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabcd30 } 0xbeabccfc->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabccfc, 0xbeabcd30 } 0xbeabcd30->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabcd30 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbeabcc94->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0 } 0xbeabcd20->Object::Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabcc94 } 0xbeabcc94->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabcc94, 0xbeabcd20 } Next token is token 'p' (0xbeabcd20 'p'Exception caught: cleaning lookahead and stack 0x153e0e0->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0x153e0e0, 0xbeabcd20 } 0x153e0d0->Object::~Object { 0x153e0b0, 0x153e0c0, 0x153e0d0, 0xbeabcd20 } 0x153e0c0->Object::~Object { 0x153e0b0, 0x153e0c0, 0xbeabcd20 } 0x153e0b0->Object::~Object { 0x153e0b0, 0xbeabcd20 } 0xbeabcd20->Object::~Object { 0xbeabcd20 } exception caught: printer end { } ./c++.at:1363: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1363: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaT stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaR stderr: 
./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1363: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:63,
                 from input.cc:148:
/usr/include/c++/12/bits/stl_uninitialized.h: In function '_ForwardIterator std::__do_uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]':
/usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1
  113 |     __do_uninit_copy(_InputIterator __first, _InputIterator __last,
      |     ^~~~~~~~~~~~~~~~
/usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1
In file included from /usr/include/c++/12/vector:70:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]', inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15, inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37, inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2, inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:487:3: /usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 137 | { return std::__do_uninit_copy(__first, __last, __result); } | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~ In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]', inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15, inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37, inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2, inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:494:3: /usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 137 | { return std::__do_uninit_copy(__first, __last, __result); } | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19, inlined from 'virtual int yy::parser::parse()' at input.cc:2039:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1362: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaap stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbebe7cc4->Object::Object { } 0xbebe7d48->Object::Object { 0xbebe7cc4 } 0xbebe7cc4->Object::~Object { 0xbebe7cc4, 0xbebe7d48 } Next token is token 'a' (0xbebe7d48 'a') 0xbebe7cc0->Object::Object { 0xbebe7d48 } 0xbebe7d48->Object::~Object { 0xbebe7cc0, 0xbebe7d48 } Shifting token 'a' (0xbebe7cc0 'a') 0x5b40b0->Object::Object { 0xbebe7cc0 } 0xbebe7cc0->Object::~Object { 0x5b40b0, 0xbebe7cc0 } Entering state 2 Stack 
now 0 2 0xbebe7d58->Object::Object { 0x5b40b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b40b0 'a') -> $$ = nterm item (0xbebe7d58 'a') 0x5b40b0->Object::~Object { 0x5b40b0, 0xbebe7d58 } 0x5b40b0->Object::Object { 0xbebe7d58 } 0xbebe7d58->Object::~Object { 0x5b40b0, 0xbebe7d58 } Entering state 11 Stack now 0 11 Reading a token 0xbebe7cc4->Object::Object { 0x5b40b0 } 0xbebe7d48->Object::Object { 0x5b40b0, 0xbebe7cc4 } 0xbebe7cc4->Object::~Object { 0x5b40b0, 0xbebe7cc4, 0xbebe7d48 } Next token is token 'a' (0xbebe7d48 'a') 0xbebe7cc0->Object::Object { 0x5b40b0, 0xbebe7d48 } 0xbebe7d48->Object::~Object { 0x5b40b0, 0xbebe7cc0, 0xbebe7d48 } Shifting token 'a' (0xbebe7cc0 'a') 0x5b40c0->Object::Object { 0x5b40b0, 0xbebe7cc0 } 0xbebe7cc0->Object::~Object { 0x5b40b0, 0x5b40c0, 0xbebe7cc0 } Entering state 2 Stack now 0 11 2 0xbebe7d58->Object::Object { 0x5b40b0, 0x5b40c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b40c0 'a') -> $$ = nterm item (0xbebe7d58 'a') 0x5b40c0->Object::~Object { 0x5b40b0, 0x5b40c0, 0xbebe7d58 } 0x5b40c0->Object::Object { 0x5b40b0, 0xbebe7d58 } 0xbebe7d58->Object::~Object { 0x5b40b0, 0x5b40c0, 0xbebe7d58 } Entering state 11 Stack now 0 11 11 Reading a token 0xbebe7cc4->Object::Object { 0x5b40b0, 0x5b40c0 } 0xbebe7d48->Object::Object { 0x5b40b0, 0x5b40c0, 0xbebe7cc4 } 0xbebe7cc4->Object::~Object { 0x5b40b0, 0x5b40c0, 0xbebe7cc4, 0xbebe7d48 } Next token is token 'a' (0xbebe7d48 'a') 0xbebe7cc0->Object::Object { 0x5b40b0, 0x5b40c0, 0xbebe7d48 } 0xbebe7d48->Object::~Object { 0x5b40b0, 0x5b40c0, 0xbebe7cc0, 0xbebe7d48 } Shifting token 'a' (0xbebe7cc0 'a') 0x5b40d0->Object::Object { 0x5b40b0, 0x5b40c0, 0xbebe7cc0 } 0xbebe7cc0->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7cc0 } Entering state 2 Stack now 0 11 11 2 0xbebe7d58->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b40d0 'a') -> $$ = nterm item (0xbebe7d58 'a') 0x5b40d0->Object::~Object { 0x5b40b0, 
0x5b40c0, 0x5b40d0, 0xbebe7d58 } 0x5b40d0->Object::Object { 0x5b40b0, 0x5b40c0, 0xbebe7d58 } 0xbebe7d58->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7d58 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbebe7cc4->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0 } 0xbebe7d48->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7cc4 } 0xbebe7cc4->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7cc4, 0xbebe7d48 } Next token is token 'a' (0xbebe7d48 'a') 0xbebe7cc0->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7d48 } 0xbebe7d48->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7cc0, 0xbebe7d48 } Shifting token 'a' (0xbebe7cc0 'a') 0x5b40e0->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7cc0 } 0xbebe7cc0->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0, 0xbebe7cc0 } Entering state 2 Stack now 0 11 11 11 2 0xbebe7d58->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b40e0 'a') -> $$ = nterm item (0xbebe7d58 'a') 0x5b40e0->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0, 0xbebe7d58 } 0x5b40e0->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7d58 } 0xbebe7d58->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0, 0xbebe7d58 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbebe7cc4->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0 } 0xbebe7d48->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0, 0xbebe7cc4 } 0xbebe7cc4->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0, 0xbebe7cc4, 0xbebe7d48 } Next token is token 'p' (0xbebe7d48 'p'Exception caught: cleaning lookahead and stack 0x5b40e0->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0, 0xbebe7d48 } 0x5b40d0->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7d48 } 0x5b40c0->Object::~Object { 0x5b40b0, 0x5b40c0, 0xbebe7d48 } 0x5b40b0->Object::~Object { 0x5b40b0, 0xbebe7d48 } 0xbebe7d48->Object::~Object { 0xbebe7d48 } exception caught: printer end 
{ } ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbebe7cc4->Object::Object { } 0xbebe7d48->Object::Object { 0xbebe7cc4 } 0xbebe7cc4->Object::~Object { 0xbebe7cc4, 0xbebe7d48 } Next token is token 'a' (0xbebe7d48 'a') 0xbebe7cc0->Object::Object { 0xbebe7d48 } 0xbebe7d48->Object::~Object { 0xbebe7cc0, 0xbebe7d48 } Shifting token 'a' (0xbebe7cc0 'a') 0x5b40b0->Object::Object { 0xbebe7cc0 } 0xbebe7cc0->Object::~Object { 0x5b40b0, 0xbebe7cc0 } Entering state 2 Stack now 0 2 0xbebe7d58->Object::Object { 0x5b40b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b40b0 'a') -> $$ = nterm item (0xbebe7d58 'a') 0x5b40b0->Object::~Object { 0x5b40b0, 0xbebe7d58 } 0x5b40b0->Object::Object { 0xbebe7d58 } 0xbebe7d58->Object::~Object { 0x5b40b0, 0xbebe7d58 } Entering state 11 Stack now 0 11 Reading a token 0xbebe7cc4->Object::Object { 0x5b40b0 } 0xbebe7d48->Object::Object { 0x5b40b0, 0xbebe7cc4 } 0xbebe7cc4->Object::~Object { 0x5b40b0, 0xbebe7cc4, 0xbebe7d48 } Next token is token 'a' (0xbebe7d48 'a') 0xbebe7cc0->Object::Object { 0x5b40b0, 0xbebe7d48 } 0xbebe7d48->Object::~Object { 0x5b40b0, 0xbebe7cc0, 0xbebe7d48 } Shifting token 'a' (0xbebe7cc0 'a') 0x5b40c0->Object::Object { 0x5b40b0, 0xbebe7cc0 } 0xbebe7cc0->Object::~Object { 0x5b40b0, 0x5b40c0, 0xbebe7cc0 } Entering state 2 Stack now 0 11 2 0xbebe7d58->Object::Object { 0x5b40b0, 0x5b40c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b40c0 'a') -> $$ = nterm item (0xbebe7d58 'a') 0x5b40c0->Object::~Object { 0x5b40b0, 0x5b40c0, 0xbebe7d58 } 0x5b40c0->Object::Object { 0x5b40b0, 0xbebe7d58 } 0xbebe7d58->Object::~Object { 0x5b40b0, 0x5b40c0, 0xbebe7d58 } Entering state 11 Stack now 0 11 11 Reading a token 0xbebe7cc4->Object::Object { 0x5b40b0, 0x5b40c0 } 0xbebe7d48->Object::Object { 0x5b40b0, 0x5b40c0, 0xbebe7cc4 } 0xbebe7cc4->Object::~Object { 0x5b40b0, 0x5b40c0, 0xbebe7cc4, 0xbebe7d48 } Next token is token 
'a' (0xbebe7d48 'a') 0xbebe7cc0->Object::Object { 0x5b40b0, 0x5b40c0, 0xbebe7d48 } 0xbebe7d48->Object::~Object { 0x5b40b0, 0x5b40c0, 0xbebe7cc0, 0xbebe7d48 } Shifting token 'a' (0xbebe7cc0 'a') 0x5b40d0->Object::Object { 0x5b40b0, 0x5b40c0, 0xbebe7cc0 } 0xbebe7cc0->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7cc0 } Entering state 2 Stack now 0 11 11 2 0xbebe7d58->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b40d0 'a') -> $$ = nterm item (0xbebe7d58 'a') 0x5b40d0->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7d58 } 0x5b40d0->Object::Object { 0x5b40b0, 0x5b40c0, 0xbebe7d58 } 0xbebe7d58->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7d58 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbebe7cc4->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0 } 0xbebe7d48->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7cc4 } 0xbebe7cc4->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7cc4, 0xbebe7d48 } Next token is token 'a' (0xbebe7d48 'a') 0xbebe7cc0->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7d48 } 0xbebe7d48->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7cc0, 0xbebe7d48 } Shifting token 'a' (0xbebe7cc0 'a') 0x5b40e0->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7cc0 } 0xbebe7cc0->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0, 0xbebe7cc0 } Entering state 2 Stack now 0 11 11 11 2 0xbebe7d58->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b40e0 'a') -> $$ = nterm item (0xbebe7d58 'a') 0x5b40e0->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0, 0xbebe7d58 } 0x5b40e0->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7d58 } 0xbebe7d58->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0, 0xbebe7d58 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbebe7cc4->Object::Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0 } 0xbebe7d48->Object::Object { 
0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0, 0xbebe7cc4 } 0xbebe7cc4->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0, 0xbebe7cc4, 0xbebe7d48 } Next token is token 'p' (0xbebe7d48 'p'Exception caught: cleaning lookahead and stack 0x5b40e0->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0x5b40e0, 0xbebe7d48 } 0x5b40d0->Object::~Object { 0x5b40b0, 0x5b40c0, 0x5b40d0, 0xbebe7d48 } 0x5b40c0->Object::~Object { 0x5b40b0, 0x5b40c0, 0xbebe7d48 } 0x5b40b0->Object::~Object { 0x5b40b0, 0xbebe7d48 } 0xbebe7d48->Object::~Object { 0xbebe7d48 } exception caught: printer end { } ./c++.at:1362: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1362: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaT stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaR stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1362: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./c++.at:1361: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1361: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1361: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1361: $PREPARSER ./input aaaap stderr: ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for 
summaries/d' stderr
./c++.at:1361: $PREPARSER ./input --debug aaaap
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0x1f816e0->Object::Object { }
Next token is token 'a' (0x1f816e0 'a')
Shifting token 'a' (0x1f816e0 'a')
Entering state 1
Stack now 0 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1f816e0 'a')
-> $$ = nterm item (0x1f816e0 'a')
Entering state 10
Stack now 0 10
Reading a token
0x1f81708->Object::Object { 0x1f816e0 }
Next token is token 'a' (0x1f81708 'a')
Shifting token 'a' (0x1f81708 'a')
Entering state 1
Stack now 0 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1f81708 'a')
-> $$ = nterm item (0x1f81708 'a')
Entering state 10
Stack now 0 10 10
Reading a token
0x1f81730->Object::Object { 0x1f816e0, 0x1f81708 }
Next token is token 'a' (0x1f81730 'a')
Shifting token 'a' (0x1f81730 'a')
Entering state 1
Stack now 0 10 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1f81730 'a')
-> $$ = nterm item (0x1f81730 'a')
Entering state 10
Stack now 0 10 10 10
Reading a token
0x1f81758->Object::Object { 0x1f816e0, 0x1f81708, 0x1f81730 }
Next token is token 'a' (0x1f81758 'a')
Shifting token 'a' (0x1f81758 'a')
Entering state 1
Stack now 0 10 10 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1f81758 'a')
-> $$ = nterm item (0x1f81758 'a')
Entering state 10
Stack now 0 10 10 10 10
Reading a token
0x1f81780->Object::Object { 0x1f816e0, 0x1f81708, 0x1f81730, 0x1f81758 }
Next token is token 'p' (0x1f81780 'p'Exception caught: cleaning lookahead and stack
0x1f81780->Object::~Object { 0x1f816e0, 0x1f81708, 0x1f81730, 0x1f81758, 0x1f81780 }
0x1f81758->Object::~Object { 0x1f816e0, 0x1f81708, 0x1f81730, 0x1f81758 }
0x1f81730->Object::~Object { 0x1f816e0, 0x1f81708, 0x1f81730 }
0x1f81708->Object::~Object { 0x1f816e0, 0x1f81708 }
0x1f816e0->Object::~Object { 0x1f816e0 }
exception caught: printer
end { }
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Stack now 0
Reading a token
0x1f816e0->Object::Object { }
Next token is token 'a' (0x1f816e0 'a')
Shifting token 'a' (0x1f816e0 'a')
Entering state 1
Stack now 0 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1f816e0 'a')
-> $$ = nterm item (0x1f816e0 'a')
Entering state 10
Stack now 0 10
Reading a token
0x1f81708->Object::Object { 0x1f816e0 }
Next token is token 'a' (0x1f81708 'a')
Shifting token 'a' (0x1f81708 'a')
Entering state 1
Stack now 0 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1f81708 'a')
-> $$ = nterm item (0x1f81708 'a')
Entering state 10
Stack now 0 10 10
Reading a token
0x1f81730->Object::Object { 0x1f816e0, 0x1f81708 }
Next token is token 'a' (0x1f81730 'a')
Shifting token 'a' (0x1f81730 'a')
Entering state 1
Stack now 0 10 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1f81730 'a')
-> $$ = nterm item (0x1f81730 'a')
Entering state 10
Stack now 0 10 10 10
Reading a token
0x1f81758->Object::Object { 0x1f816e0, 0x1f81708, 0x1f81730 }
Next token is token 'a' (0x1f81758 'a')
Shifting token 'a' (0x1f81758 'a')
Entering state 1
Stack now 0 10 10 10 1
Reducing stack by rule 4 (line 147):
   $1 = token 'a' (0x1f81758 'a')
-> $$ = nterm item (0x1f81758 'a')
Entering state 10
Stack now 0 10 10 10 10
Reading a token
0x1f81780->Object::Object { 0x1f816e0, 0x1f81708, 0x1f81730, 0x1f81758 }
Next token is token 'p' (0x1f81780 'p'Exception caught: cleaning lookahead and stack
0x1f81780->Object::~Object { 0x1f816e0, 0x1f81708, 0x1f81730, 0x1f81758, 0x1f81780 }
0x1f81758->Object::~Object { 0x1f816e0, 0x1f81708, 0x1f81730, 0x1f81758 }
0x1f81730->Object::~Object { 0x1f816e0, 0x1f81708, 0x1f81730 }
0x1f81708->Object::~Object { 0x1f816e0, 0x1f81708 }
0x1f816e0->Object::~Object { 0x1f816e0 }
exception caught: printer
end { }
./c++.at:1361: grep '^exception caught: printer$' stderr
stdout:
exception caught: printer
./c++.at:1361: $PREPARSER ./input aaaae
stderr:
exception caught: syntax error
./c++.at:1361: sed >&2 -e
'/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaT
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaR
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1361: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:63,
                 from input.cc:148:
/usr/include/c++/12/bits/stl_uninitialized.h: In function '_ForwardIterator std::__do_uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]':
/usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1
  113 |     __do_uninit_copy(_InputIterator __first, _InputIterator __last,
      |     ^~~~~~~~~~~~~~~~
/usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1
In file included from /usr/include/c++/12/vector:70:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]', inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15, inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37, inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2, inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:487:3: /usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 137 | { return std::__do_uninit_copy(__first, __last, __result); } | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~ In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]', inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15, inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37, inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2, inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:494:3: /usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 137 | { return std::__do_uninit_copy(__first, __last, __result); } | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19, inlined from 'virtual int yy::parser::parse()' at input.cc:2033:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1363: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaap stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbebbccc4->Object::Object { } 0xbebbcd48->Object::Object { 0xbebbccc4 } 0xbebbccc4->Object::~Object { 0xbebbccc4, 0xbebbcd48 } Next token is token 'a' (0xbebbcd48 'a') 0xbebbccc0->Object::Object { 0xbebbcd48 } 0xbebbcd48->Object::~Object { 0xbebbccc0, 0xbebbcd48 } Shifting token 'a' (0xbebbccc0 'a') 0x21090b0->Object::Object { 0xbebbccc0 } 0xbebbccc0->Object::~Object { 0x21090b0, 0xbebbccc0 } Entering state 1 Stack 
now 0 1 0xbebbcd58->Object::Object { 0x21090b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x21090b0 'a') -> $$ = nterm item (0xbebbcd58 'a') 0x21090b0->Object::~Object { 0x21090b0, 0xbebbcd58 } 0x21090b0->Object::Object { 0xbebbcd58 } 0xbebbcd58->Object::~Object { 0x21090b0, 0xbebbcd58 } Entering state 10 Stack now 0 10 Reading a token 0xbebbccc4->Object::Object { 0x21090b0 } 0xbebbcd48->Object::Object { 0x21090b0, 0xbebbccc4 } 0xbebbccc4->Object::~Object { 0x21090b0, 0xbebbccc4, 0xbebbcd48 } Next token is token 'a' (0xbebbcd48 'a') 0xbebbccc0->Object::Object { 0x21090b0, 0xbebbcd48 } 0xbebbcd48->Object::~Object { 0x21090b0, 0xbebbccc0, 0xbebbcd48 } Shifting token 'a' (0xbebbccc0 'a') 0x21090c0->Object::Object { 0x21090b0, 0xbebbccc0 } 0xbebbccc0->Object::~Object { 0x21090b0, 0x21090c0, 0xbebbccc0 } Entering state 1 Stack now 0 10 1 0xbebbcd58->Object::Object { 0x21090b0, 0x21090c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x21090c0 'a') -> $$ = nterm item (0xbebbcd58 'a') 0x21090c0->Object::~Object { 0x21090b0, 0x21090c0, 0xbebbcd58 } 0x21090c0->Object::Object { 0x21090b0, 0xbebbcd58 } 0xbebbcd58->Object::~Object { 0x21090b0, 0x21090c0, 0xbebbcd58 } Entering state 10 Stack now 0 10 10 Reading a token 0xbebbccc4->Object::Object { 0x21090b0, 0x21090c0 } 0xbebbcd48->Object::Object { 0x21090b0, 0x21090c0, 0xbebbccc4 } 0xbebbccc4->Object::~Object { 0x21090b0, 0x21090c0, 0xbebbccc4, 0xbebbcd48 } Next token is token 'a' (0xbebbcd48 'a') 0xbebbccc0->Object::Object { 0x21090b0, 0x21090c0, 0xbebbcd48 } 0xbebbcd48->Object::~Object { 0x21090b0, 0x21090c0, 0xbebbccc0, 0xbebbcd48 } Shifting token 'a' (0xbebbccc0 'a') 0x21090d0->Object::Object { 0x21090b0, 0x21090c0, 0xbebbccc0 } 0xbebbccc0->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbccc0 } Entering state 1 Stack now 0 10 10 1 0xbebbcd58->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x21090d0 'a') -> $$ = nterm item (0xbebbcd58 'a') 
0x21090d0->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbcd58 } 0x21090d0->Object::Object { 0x21090b0, 0x21090c0, 0xbebbcd58 } 0xbebbcd58->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbcd58 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbebbccc4->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0 } 0xbebbcd48->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbccc4 } 0xbebbccc4->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbccc4, 0xbebbcd48 } Next token is token 'a' (0xbebbcd48 'a') 0xbebbccc0->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbcd48 } 0xbebbcd48->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbccc0, 0xbebbcd48 } Shifting token 'a' (0xbebbccc0 'a') 0x21090e0->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbccc0 } 0xbebbccc0->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0, 0xbebbccc0 } Entering state 1 Stack now 0 10 10 10 1 0xbebbcd58->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x21090e0 'a') -> $$ = nterm item (0xbebbcd58 'a') 0x21090e0->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0, 0xbebbcd58 } 0x21090e0->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbcd58 } 0xbebbcd58->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0, 0xbebbcd58 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbebbccc4->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0 } 0xbebbcd48->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0, 0xbebbccc4 } 0xbebbccc4->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0, 0xbebbccc4, 0xbebbcd48 } Next token is token 'p' (0xbebbcd48 'p'Exception caught: cleaning lookahead and stack 0x21090e0->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0, 0xbebbcd48 } 0x21090d0->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbcd48 } 0x21090c0->Object::~Object { 0x21090b0, 0x21090c0, 0xbebbcd48 } 
0x21090b0->Object::~Object { 0x21090b0, 0xbebbcd48 } 0xbebbcd48->Object::~Object { 0xbebbcd48 } exception caught: printer end { } ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbebbccc4->Object::Object { } 0xbebbcd48->Object::Object { 0xbebbccc4 } 0xbebbccc4->Object::~Object { 0xbebbccc4, 0xbebbcd48 } Next token is token 'a' (0xbebbcd48 'a') 0xbebbccc0->Object::Object { 0xbebbcd48 } 0xbebbcd48->Object::~Object { 0xbebbccc0, 0xbebbcd48 } Shifting token 'a' (0xbebbccc0 'a') 0x21090b0->Object::Object { 0xbebbccc0 } 0xbebbccc0->Object::~Object { 0x21090b0, 0xbebbccc0 } Entering state 1 Stack now 0 1 0xbebbcd58->Object::Object { 0x21090b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x21090b0 'a') -> $$ = nterm item (0xbebbcd58 'a') 0x21090b0->Object::~Object { 0x21090b0, 0xbebbcd58 } 0x21090b0->Object::Object { 0xbebbcd58 } 0xbebbcd58->Object::~Object { 0x21090b0, 0xbebbcd58 } Entering state 10 Stack now 0 10 Reading a token 0xbebbccc4->Object::Object { 0x21090b0 } 0xbebbcd48->Object::Object { 0x21090b0, 0xbebbccc4 } 0xbebbccc4->Object::~Object { 0x21090b0, 0xbebbccc4, 0xbebbcd48 } Next token is token 'a' (0xbebbcd48 'a') 0xbebbccc0->Object::Object { 0x21090b0, 0xbebbcd48 } 0xbebbcd48->Object::~Object { 0x21090b0, 0xbebbccc0, 0xbebbcd48 } Shifting token 'a' (0xbebbccc0 'a') 0x21090c0->Object::Object { 0x21090b0, 0xbebbccc0 } 0xbebbccc0->Object::~Object { 0x21090b0, 0x21090c0, 0xbebbccc0 } Entering state 1 Stack now 0 10 1 0xbebbcd58->Object::Object { 0x21090b0, 0x21090c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x21090c0 'a') -> $$ = nterm item (0xbebbcd58 'a') 0x21090c0->Object::~Object { 0x21090b0, 0x21090c0, 0xbebbcd58 } 0x21090c0->Object::Object { 0x21090b0, 0xbebbcd58 } 0xbebbcd58->Object::~Object { 0x21090b0, 0x21090c0, 0xbebbcd58 } Entering state 10 Stack now 0 10 10 Reading a token 0xbebbccc4->Object::Object { 0x21090b0, 0x21090c0 } 
0xbebbcd48->Object::Object { 0x21090b0, 0x21090c0, 0xbebbccc4 } 0xbebbccc4->Object::~Object { 0x21090b0, 0x21090c0, 0xbebbccc4, 0xbebbcd48 } Next token is token 'a' (0xbebbcd48 'a') 0xbebbccc0->Object::Object { 0x21090b0, 0x21090c0, 0xbebbcd48 } 0xbebbcd48->Object::~Object { 0x21090b0, 0x21090c0, 0xbebbccc0, 0xbebbcd48 } Shifting token 'a' (0xbebbccc0 'a') 0x21090d0->Object::Object { 0x21090b0, 0x21090c0, 0xbebbccc0 } 0xbebbccc0->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbccc0 } Entering state 1 Stack now 0 10 10 1 0xbebbcd58->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x21090d0 'a') -> $$ = nterm item (0xbebbcd58 'a') 0x21090d0->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbcd58 } 0x21090d0->Object::Object { 0x21090b0, 0x21090c0, 0xbebbcd58 } 0xbebbcd58->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbcd58 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbebbccc4->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0 } 0xbebbcd48->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbccc4 } 0xbebbccc4->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbccc4, 0xbebbcd48 } Next token is token 'a' (0xbebbcd48 'a') 0xbebbccc0->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbcd48 } 0xbebbcd48->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbccc0, 0xbebbcd48 } Shifting token 'a' (0xbebbccc0 'a') 0x21090e0->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbccc0 } 0xbebbccc0->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0, 0xbebbccc0 } Entering state 1 Stack now 0 10 10 10 1 0xbebbcd58->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x21090e0 'a') -> $$ = nterm item (0xbebbcd58 'a') 0x21090e0->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0, 0xbebbcd58 } 0x21090e0->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbcd58 } 
0xbebbcd58->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0, 0xbebbcd58 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbebbccc4->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0 } 0xbebbcd48->Object::Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0, 0xbebbccc4 } 0xbebbccc4->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0, 0xbebbccc4, 0xbebbcd48 } Next token is token 'p' (0xbebbcd48 'p'Exception caught: cleaning lookahead and stack 0x21090e0->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0x21090e0, 0xbebbcd48 } 0x21090d0->Object::~Object { 0x21090b0, 0x21090c0, 0x21090d0, 0xbebbcd48 } 0x21090c0->Object::~Object { 0x21090b0, 0x21090c0, 0xbebbcd48 } 0x21090b0->Object::~Object { 0x21090b0, 0xbebbcd48 } 0xbebbcd48->Object::~Object { 0xbebbcd48 } exception caught: printer end { } ./c++.at:1363: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1363: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaT stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaR stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1363: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:63, from input.cc:148: /usr/include/c++/12/bits/stl_uninitialized.h: In function '_ForwardIterator std::__do_uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]': 
/usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 113 | __do_uninit_copy(_InputIterator __first, _InputIterator __last, | ^~~~~~~~~~~~~~~~ /usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 In file included from /usr/include/c++/12/vector:70: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]', inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15, inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37, inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at 
/usr/include/c++/12/bits/stl_uninitialized.h:397:2, inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:487:3: /usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 137 | { return std::__do_uninit_copy(__first, __last, __result); } | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~ In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]', inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15, inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37, inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2, inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:494:3: /usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 137 | { return std::__do_uninit_copy(__first, __last, __result); } | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19, inlined from 'virtual int yy::parser::parse()' at input.cc:2039:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1362: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaap stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbeae3cbc->Object::Object { } 0xbeae3d48->Object::Object { 0xbeae3cbc } 0xbeae3cbc->Object::~Object { 0xbeae3cbc, 0xbeae3d48 } Next token is token 'a' (0xbeae3d48 'a') 0xbeae3cb8->Object::Object { 0xbeae3d48 } 0xbeae3d48->Object::~Object { 0xbeae3cb8, 0xbeae3d48 } Shifting token 'a' (0xbeae3cb8 'a') 0x1e4d0b0->Object::Object { 0xbeae3cb8 } 0xbeae3cb8->Object::~Object { 0x1e4d0b0, 0xbeae3cb8 } Entering state 2 Stack 
now 0 2 0xbeae3d58->Object::Object { 0x1e4d0b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1e4d0b0 'a') -> $$ = nterm item (0xbeae3d58 'a') 0x1e4d0b0->Object::~Object { 0x1e4d0b0, 0xbeae3d58 } 0x1e4d0b0->Object::Object { 0xbeae3d58 } 0xbeae3d58->Object::~Object { 0x1e4d0b0, 0xbeae3d58 } Entering state 11 Stack now 0 11 Reading a token 0xbeae3cbc->Object::Object { 0x1e4d0b0 } 0xbeae3d48->Object::Object { 0x1e4d0b0, 0xbeae3cbc } 0xbeae3cbc->Object::~Object { 0x1e4d0b0, 0xbeae3cbc, 0xbeae3d48 } Next token is token 'a' (0xbeae3d48 'a') 0xbeae3cb8->Object::Object { 0x1e4d0b0, 0xbeae3d48 } 0xbeae3d48->Object::~Object { 0x1e4d0b0, 0xbeae3cb8, 0xbeae3d48 } Shifting token 'a' (0xbeae3cb8 'a') 0x1e4d0c0->Object::Object { 0x1e4d0b0, 0xbeae3cb8 } 0xbeae3cb8->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3cb8 } Entering state 2 Stack now 0 11 2 0xbeae3d58->Object::Object { 0x1e4d0b0, 0x1e4d0c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1e4d0c0 'a') -> $$ = nterm item (0xbeae3d58 'a') 0x1e4d0c0->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3d58 } 0x1e4d0c0->Object::Object { 0x1e4d0b0, 0xbeae3d58 } 0xbeae3d58->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3d58 } Entering state 11 Stack now 0 11 11 Reading a token 0xbeae3cbc->Object::Object { 0x1e4d0b0, 0x1e4d0c0 } 0xbeae3d48->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3cbc } 0xbeae3cbc->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3cbc, 0xbeae3d48 } Next token is token 'a' (0xbeae3d48 'a') 0xbeae3cb8->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3d48 } 0xbeae3d48->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3cb8, 0xbeae3d48 } Shifting token 'a' (0xbeae3cb8 'a') 0x1e4d0d0->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3cb8 } 0xbeae3cb8->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3cb8 } Entering state 2 Stack now 0 11 11 2 0xbeae3d58->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1e4d0d0 'a') -> $$ = nterm item (0xbeae3d58 'a') 
0x1e4d0d0->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3d58 } 0x1e4d0d0->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3d58 } 0xbeae3d58->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3d58 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbeae3cbc->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0 } 0xbeae3d48->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3cbc } 0xbeae3cbc->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3cbc, 0xbeae3d48 } Next token is token 'a' (0xbeae3d48 'a') 0xbeae3cb8->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3d48 } 0xbeae3d48->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3cb8, 0xbeae3d48 } Shifting token 'a' (0xbeae3cb8 'a') 0x1e4d0e0->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3cb8 } 0xbeae3cb8->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0, 0xbeae3cb8 } Entering state 2 Stack now 0 11 11 11 2 0xbeae3d58->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1e4d0e0 'a') -> $$ = nterm item (0xbeae3d58 'a') 0x1e4d0e0->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0, 0xbeae3d58 } 0x1e4d0e0->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3d58 } 0xbeae3d58->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0, 0xbeae3d58 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbeae3cbc->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0 } 0xbeae3d48->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0, 0xbeae3cbc } 0xbeae3cbc->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0, 0xbeae3cbc, 0xbeae3d48 } Next token is token 'p' (0xbeae3d48 'p'Exception caught: cleaning lookahead and stack 0x1e4d0e0->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0, 0xbeae3d48 } 0x1e4d0d0->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3d48 } 0x1e4d0c0->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3d48 } 
0x1e4d0b0->Object::~Object { 0x1e4d0b0, 0xbeae3d48 } 0xbeae3d48->Object::~Object { 0xbeae3d48 } exception caught: printer end { } ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbeae3cbc->Object::Object { } 0xbeae3d48->Object::Object { 0xbeae3cbc } 0xbeae3cbc->Object::~Object { 0xbeae3cbc, 0xbeae3d48 } Next token is token 'a' (0xbeae3d48 'a') 0xbeae3cb8->Object::Object { 0xbeae3d48 } 0xbeae3d48->Object::~Object { 0xbeae3cb8, 0xbeae3d48 } Shifting token 'a' (0xbeae3cb8 'a') 0x1e4d0b0->Object::Object { 0xbeae3cb8 } 0xbeae3cb8->Object::~Object { 0x1e4d0b0, 0xbeae3cb8 } Entering state 2 Stack now 0 2 0xbeae3d58->Object::Object { 0x1e4d0b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1e4d0b0 'a') -> $$ = nterm item (0xbeae3d58 'a') 0x1e4d0b0->Object::~Object { 0x1e4d0b0, 0xbeae3d58 } 0x1e4d0b0->Object::Object { 0xbeae3d58 } 0xbeae3d58->Object::~Object { 0x1e4d0b0, 0xbeae3d58 } Entering state 11 Stack now 0 11 Reading a token 0xbeae3cbc->Object::Object { 0x1e4d0b0 } 0xbeae3d48->Object::Object { 0x1e4d0b0, 0xbeae3cbc } 0xbeae3cbc->Object::~Object { 0x1e4d0b0, 0xbeae3cbc, 0xbeae3d48 } Next token is token 'a' (0xbeae3d48 'a') 0xbeae3cb8->Object::Object { 0x1e4d0b0, 0xbeae3d48 } 0xbeae3d48->Object::~Object { 0x1e4d0b0, 0xbeae3cb8, 0xbeae3d48 } Shifting token 'a' (0xbeae3cb8 'a') 0x1e4d0c0->Object::Object { 0x1e4d0b0, 0xbeae3cb8 } 0xbeae3cb8->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3cb8 } Entering state 2 Stack now 0 11 2 0xbeae3d58->Object::Object { 0x1e4d0b0, 0x1e4d0c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1e4d0c0 'a') -> $$ = nterm item (0xbeae3d58 'a') 0x1e4d0c0->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3d58 } 0x1e4d0c0->Object::Object { 0x1e4d0b0, 0xbeae3d58 } 0xbeae3d58->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3d58 } Entering state 11 Stack now 0 11 11 Reading a token 0xbeae3cbc->Object::Object { 0x1e4d0b0, 0x1e4d0c0 } 
0xbeae3d48->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3cbc } 0xbeae3cbc->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3cbc, 0xbeae3d48 } Next token is token 'a' (0xbeae3d48 'a') 0xbeae3cb8->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3d48 } 0xbeae3d48->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3cb8, 0xbeae3d48 } Shifting token 'a' (0xbeae3cb8 'a') 0x1e4d0d0->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3cb8 } 0xbeae3cb8->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3cb8 } Entering state 2 Stack now 0 11 11 2 0xbeae3d58->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1e4d0d0 'a') -> $$ = nterm item (0xbeae3d58 'a') 0x1e4d0d0->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3d58 } 0x1e4d0d0->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3d58 } 0xbeae3d58->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3d58 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbeae3cbc->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0 } 0xbeae3d48->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3cbc } 0xbeae3cbc->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3cbc, 0xbeae3d48 } Next token is token 'a' (0xbeae3d48 'a') 0xbeae3cb8->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3d48 } 0xbeae3d48->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3cb8, 0xbeae3d48 } Shifting token 'a' (0xbeae3cb8 'a') 0x1e4d0e0->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3cb8 } 0xbeae3cb8->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0, 0xbeae3cb8 } Entering state 2 Stack now 0 11 11 11 2 0xbeae3d58->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x1e4d0e0 'a') -> $$ = nterm item (0xbeae3d58 'a') 0x1e4d0e0->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0, 0xbeae3d58 } 0x1e4d0e0->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3d58 } 
0xbeae3d58->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0, 0xbeae3d58 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbeae3cbc->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0 } 0xbeae3d48->Object::Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0, 0xbeae3cbc } 0xbeae3cbc->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0, 0xbeae3cbc, 0xbeae3d48 } Next token is token 'p' (0xbeae3d48 'p'Exception caught: cleaning lookahead and stack 0x1e4d0e0->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0x1e4d0e0, 0xbeae3d48 } 0x1e4d0d0->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0x1e4d0d0, 0xbeae3d48 } 0x1e4d0c0->Object::~Object { 0x1e4d0b0, 0x1e4d0c0, 0xbeae3d48 } 0x1e4d0b0->Object::~Object { 0x1e4d0b0, 0xbeae3d48 } 0xbeae3d48->Object::~Object { 0xbeae3d48 } exception caught: printer end { } ./c++.at:1362: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1362: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaT stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaR stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1362: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ./c++.at:1361: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1361: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 
./c++.at:1361: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1361: $PREPARSER ./input aaaap stderr: ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1361: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0x1d626e0->Object::Object { } Next token is token 'a' (0x1d626e0 'a') Shifting token 'a' (0x1d626e0 'a') Entering state 1 Stack now 0 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1d626e0 'a') -> $$ = nterm item (0x1d626e0 'a') Entering state 10 Stack now 0 10 Reading a token 0x1d62708->Object::Object { 0x1d626e0 } Next token is token 'a' (0x1d62708 'a') Shifting token 'a' (0x1d62708 'a') Entering state 1 Stack now 0 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1d62708 'a') -> $$ = nterm item (0x1d62708 'a') Entering state 10 Stack now 0 10 10 Reading a token 0x1d62730->Object::Object { 0x1d626e0, 0x1d62708 } Next token is token 'a' (0x1d62730 'a') Shifting token 'a' (0x1d62730 'a') Entering state 1 Stack now 0 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1d62730 'a') -> $$ = nterm item (0x1d62730 'a') Entering state 10 Stack now 0 10 10 10 Reading a token 0x1d62758->Object::Object { 0x1d626e0, 0x1d62708, 0x1d62730 } Next token is token 'a' (0x1d62758 'a') Shifting token 'a' (0x1d62758 'a') Entering state 1 Stack now 0 10 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1d62758 'a') -> $$ = nterm item (0x1d62758 'a') Entering state 10 Stack now 0 10 10 10 10 Reading a token 0x1d62780->Object::Object { 0x1d626e0, 0x1d62708, 0x1d62730, 0x1d62758 } Next token is token 'p' (0x1d62780 'p'Exception caught: cleaning lookahead and stack 0x1d62780->Object::~Object { 0x1d626e0, 0x1d62708, 0x1d62730, 0x1d62758, 0x1d62780 } 0x1d62758->Object::~Object { 0x1d626e0, 0x1d62708, 0x1d62730, 0x1d62758 } 0x1d62730->Object::~Object { 
0x1d626e0, 0x1d62708, 0x1d62730 } 0x1d62708->Object::~Object { 0x1d626e0, 0x1d62708 } 0x1d626e0->Object::~Object { 0x1d626e0 } exception caught: printer end { } ./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0x1d626e0->Object::Object { } Next token is token 'a' (0x1d626e0 'a') Shifting token 'a' (0x1d626e0 'a') Entering state 1 Stack now 0 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1d626e0 'a') -> $$ = nterm item (0x1d626e0 'a') Entering state 10 Stack now 0 10 Reading a token 0x1d62708->Object::Object { 0x1d626e0 } Next token is token 'a' (0x1d62708 'a') Shifting token 'a' (0x1d62708 'a') Entering state 1 Stack now 0 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1d62708 'a') -> $$ = nterm item (0x1d62708 'a') Entering state 10 Stack now 0 10 10 Reading a token 0x1d62730->Object::Object { 0x1d626e0, 0x1d62708 } Next token is token 'a' (0x1d62730 'a') Shifting token 'a' (0x1d62730 'a') Entering state 1 Stack now 0 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1d62730 'a') -> $$ = nterm item (0x1d62730 'a') Entering state 10 Stack now 0 10 10 10 Reading a token 0x1d62758->Object::Object { 0x1d626e0, 0x1d62708, 0x1d62730 } Next token is token 'a' (0x1d62758 'a') Shifting token 'a' (0x1d62758 'a') Entering state 1 Stack now 0 10 10 10 1 Reducing stack by rule 4 (line 147): $1 = token 'a' (0x1d62758 'a') -> $$ = nterm item (0x1d62758 'a') Entering state 10 Stack now 0 10 10 10 10 Reading a token 0x1d62780->Object::Object { 0x1d626e0, 0x1d62708, 0x1d62730, 0x1d62758 } Next token is token 'p' (0x1d62780 'p'Exception caught: cleaning lookahead and stack 0x1d62780->Object::~Object { 0x1d626e0, 0x1d62708, 0x1d62730, 0x1d62758, 0x1d62780 } 0x1d62758->Object::~Object { 0x1d626e0, 0x1d62708, 0x1d62730, 0x1d62758 } 0x1d62730->Object::~Object { 0x1d626e0, 0x1d62708, 0x1d62730 } 0x1d62708->Object::~Object { 0x1d626e0, 0x1d62708 } 
0x1d626e0->Object::~Object { 0x1d626e0 }
exception caught: printer
end { }
./c++.at:1361: grep '^exception caught: printer$' stderr
stdout:
exception caught: printer
./c++.at:1361: $PREPARSER ./input aaaae
stderr:
exception caught: syntax error
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaE
stderr:
exception caught: syntax error, unexpected end of file, expecting 'a'
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaT
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./c++.at:1361: $PREPARSER ./input aaaaR
stderr:
./c++.at:1361: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
686. c++.at:1361: ok
689. c++.at:1371: testing C++ GLR parser identifier shadowing ...
./c++.at:1410: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.yy
stderr:
In file included from /usr/include/c++/12/vector:63,
                 from input.cc:148:
/usr/include/c++/12/bits/stl_uninitialized.h: In function '_ForwardIterator std::__do_uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]':
/usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1
  113 |     __do_uninit_copy(_InputIterator __first, _InputIterator __last,
      |     ^~~~~~~~~~~~~~~~
/usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1
In file included from /usr/include/c++/12/vector:70:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]', inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15, inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37, inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2, inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:487:3: /usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 137 | { return std::__do_uninit_copy(__first, __last, __result); } | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~ In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]', inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15, inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37, inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2, inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:494:3: /usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 137 | { return std::__do_uninit_copy(__first, __last, __result); } | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19, inlined from 'virtual int yy::parser::parse()' at input.cc:2033:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1363: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input i ======== Testing with C++ standard flags: '' stderr: ./c++.at:1411: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS exception caught: initial-action ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaap stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbec42cc4->Object::Object { } 0xbec42d48->Object::Object { 0xbec42cc4 } 0xbec42cc4->Object::~Object { 0xbec42cc4, 0xbec42d48 } Next token is token 'a' (0xbec42d48 'a') 0xbec42cc0->Object::Object { 0xbec42d48 } 0xbec42d48->Object::~Object { 0xbec42cc0, 0xbec42d48 } Shifting token 'a' (0xbec42cc0 'a') 
0x11890b0->Object::Object { 0xbec42cc0 } 0xbec42cc0->Object::~Object { 0x11890b0, 0xbec42cc0 } Entering state 1 Stack now 0 1 0xbec42d58->Object::Object { 0x11890b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x11890b0 'a') -> $$ = nterm item (0xbec42d58 'a') 0x11890b0->Object::~Object { 0x11890b0, 0xbec42d58 } 0x11890b0->Object::Object { 0xbec42d58 } 0xbec42d58->Object::~Object { 0x11890b0, 0xbec42d58 } Entering state 10 Stack now 0 10 Reading a token 0xbec42cc4->Object::Object { 0x11890b0 } 0xbec42d48->Object::Object { 0x11890b0, 0xbec42cc4 } 0xbec42cc4->Object::~Object { 0x11890b0, 0xbec42cc4, 0xbec42d48 } Next token is token 'a' (0xbec42d48 'a') 0xbec42cc0->Object::Object { 0x11890b0, 0xbec42d48 } 0xbec42d48->Object::~Object { 0x11890b0, 0xbec42cc0, 0xbec42d48 } Shifting token 'a' (0xbec42cc0 'a') 0x11890c0->Object::Object { 0x11890b0, 0xbec42cc0 } 0xbec42cc0->Object::~Object { 0x11890b0, 0x11890c0, 0xbec42cc0 } Entering state 1 Stack now 0 10 1 0xbec42d58->Object::Object { 0x11890b0, 0x11890c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x11890c0 'a') -> $$ = nterm item (0xbec42d58 'a') 0x11890c0->Object::~Object { 0x11890b0, 0x11890c0, 0xbec42d58 } 0x11890c0->Object::Object { 0x11890b0, 0xbec42d58 } 0xbec42d58->Object::~Object { 0x11890b0, 0x11890c0, 0xbec42d58 } Entering state 10 Stack now 0 10 10 Reading a token 0xbec42cc4->Object::Object { 0x11890b0, 0x11890c0 } 0xbec42d48->Object::Object { 0x11890b0, 0x11890c0, 0xbec42cc4 } 0xbec42cc4->Object::~Object { 0x11890b0, 0x11890c0, 0xbec42cc4, 0xbec42d48 } Next token is token 'a' (0xbec42d48 'a') 0xbec42cc0->Object::Object { 0x11890b0, 0x11890c0, 0xbec42d48 } 0xbec42d48->Object::~Object { 0x11890b0, 0x11890c0, 0xbec42cc0, 0xbec42d48 } Shifting token 'a' (0xbec42cc0 'a') 0x11890d0->Object::Object { 0x11890b0, 0x11890c0, 0xbec42cc0 } 0xbec42cc0->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42cc0 } Entering state 1 Stack now 0 10 10 1 0xbec42d58->Object::Object { 0x11890b0, 
0x11890c0, 0x11890d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x11890d0 'a') -> $$ = nterm item (0xbec42d58 'a') 0x11890d0->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42d58 } 0x11890d0->Object::Object { 0x11890b0, 0x11890c0, 0xbec42d58 } 0xbec42d58->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42d58 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbec42cc4->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0 } 0xbec42d48->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42cc4 } 0xbec42cc4->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42cc4, 0xbec42d48 } Next token is token 'a' (0xbec42d48 'a') 0xbec42cc0->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42d48 } 0xbec42d48->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42cc0, 0xbec42d48 } Shifting token 'a' (0xbec42cc0 'a') 0x11890e0->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42cc0 } 0xbec42cc0->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0, 0xbec42cc0 } Entering state 1 Stack now 0 10 10 10 1 0xbec42d58->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x11890e0 'a') -> $$ = nterm item (0xbec42d58 'a') 0x11890e0->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0, 0xbec42d58 } 0x11890e0->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42d58 } 0xbec42d58->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0, 0xbec42d58 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbec42cc4->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0 } 0xbec42d48->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0, 0xbec42cc4 } 0xbec42cc4->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0, 0xbec42cc4, 0xbec42d48 } Next token is token 'p' (0xbec42d48 'p'Exception caught: cleaning lookahead and stack 0x11890e0->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0, 0xbec42d48 } 
0x11890d0->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42d48 } 0x11890c0->Object::~Object { 0x11890b0, 0x11890c0, 0xbec42d48 } 0x11890b0->Object::~Object { 0x11890b0, 0xbec42d48 } 0xbec42d48->Object::~Object { 0xbec42d48 } exception caught: printer end { } ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbec42cc4->Object::Object { } 0xbec42d48->Object::Object { 0xbec42cc4 } 0xbec42cc4->Object::~Object { 0xbec42cc4, 0xbec42d48 } Next token is token 'a' (0xbec42d48 'a') 0xbec42cc0->Object::Object { 0xbec42d48 } 0xbec42d48->Object::~Object { 0xbec42cc0, 0xbec42d48 } Shifting token 'a' (0xbec42cc0 'a') 0x11890b0->Object::Object { 0xbec42cc0 } 0xbec42cc0->Object::~Object { 0x11890b0, 0xbec42cc0 } Entering state 1 Stack now 0 1 0xbec42d58->Object::Object { 0x11890b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x11890b0 'a') -> $$ = nterm item (0xbec42d58 'a') 0x11890b0->Object::~Object { 0x11890b0, 0xbec42d58 } 0x11890b0->Object::Object { 0xbec42d58 } 0xbec42d58->Object::~Object { 0x11890b0, 0xbec42d58 } Entering state 10 Stack now 0 10 Reading a token 0xbec42cc4->Object::Object { 0x11890b0 } 0xbec42d48->Object::Object { 0x11890b0, 0xbec42cc4 } 0xbec42cc4->Object::~Object { 0x11890b0, 0xbec42cc4, 0xbec42d48 } Next token is token 'a' (0xbec42d48 'a') 0xbec42cc0->Object::Object { 0x11890b0, 0xbec42d48 } 0xbec42d48->Object::~Object { 0x11890b0, 0xbec42cc0, 0xbec42d48 } Shifting token 'a' (0xbec42cc0 'a') 0x11890c0->Object::Object { 0x11890b0, 0xbec42cc0 } 0xbec42cc0->Object::~Object { 0x11890b0, 0x11890c0, 0xbec42cc0 } Entering state 1 Stack now 0 10 1 0xbec42d58->Object::Object { 0x11890b0, 0x11890c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x11890c0 'a') -> $$ = nterm item (0xbec42d58 'a') 0x11890c0->Object::~Object { 0x11890b0, 0x11890c0, 0xbec42d58 } 0x11890c0->Object::Object { 0x11890b0, 0xbec42d58 } 0xbec42d58->Object::~Object { 
0x11890b0, 0x11890c0, 0xbec42d58 } Entering state 10 Stack now 0 10 10 Reading a token 0xbec42cc4->Object::Object { 0x11890b0, 0x11890c0 } 0xbec42d48->Object::Object { 0x11890b0, 0x11890c0, 0xbec42cc4 } 0xbec42cc4->Object::~Object { 0x11890b0, 0x11890c0, 0xbec42cc4, 0xbec42d48 } Next token is token 'a' (0xbec42d48 'a') 0xbec42cc0->Object::Object { 0x11890b0, 0x11890c0, 0xbec42d48 } 0xbec42d48->Object::~Object { 0x11890b0, 0x11890c0, 0xbec42cc0, 0xbec42d48 } Shifting token 'a' (0xbec42cc0 'a') 0x11890d0->Object::Object { 0x11890b0, 0x11890c0, 0xbec42cc0 } 0xbec42cc0->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42cc0 } Entering state 1 Stack now 0 10 10 1 0xbec42d58->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x11890d0 'a') -> $$ = nterm item (0xbec42d58 'a') 0x11890d0->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42d58 } 0x11890d0->Object::Object { 0x11890b0, 0x11890c0, 0xbec42d58 } 0xbec42d58->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42d58 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbec42cc4->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0 } 0xbec42d48->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42cc4 } 0xbec42cc4->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42cc4, 0xbec42d48 } Next token is token 'a' (0xbec42d48 'a') 0xbec42cc0->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42d48 } 0xbec42d48->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42cc0, 0xbec42d48 } Shifting token 'a' (0xbec42cc0 'a') 0x11890e0->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42cc0 } 0xbec42cc0->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0, 0xbec42cc0 } Entering state 1 Stack now 0 10 10 10 1 0xbec42d58->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x11890e0 'a') -> $$ = nterm item (0xbec42d58 'a') 0x11890e0->Object::~Object { 0x11890b0, 
0x11890c0, 0x11890d0, 0x11890e0, 0xbec42d58 } 0x11890e0->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42d58 } 0xbec42d58->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0, 0xbec42d58 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbec42cc4->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0 } 0xbec42d48->Object::Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0, 0xbec42cc4 } 0xbec42cc4->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0, 0xbec42cc4, 0xbec42d48 } Next token is token 'p' (0xbec42d48 'p'Exception caught: cleaning lookahead and stack 0x11890e0->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0x11890e0, 0xbec42d48 } 0x11890d0->Object::~Object { 0x11890b0, 0x11890c0, 0x11890d0, 0xbec42d48 } 0x11890c0->Object::~Object { 0x11890b0, 0x11890c0, 0xbec42d48 } 0x11890b0->Object::~Object { 0x11890b0, 0xbec42d48 } 0xbec42d48->Object::~Object { 0xbec42d48 } exception caught: printer end { } ./c++.at:1363: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1363: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaT stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaR stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1363: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ======== Testing with C++ standard flags: '' ./c++.at:1411: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:63, from 
input.cc:148: /usr/include/c++/12/bits/stl_uninitialized.h: In function '_ForwardIterator std::__do_uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]': /usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 113 | __do_uninit_copy(_InputIterator __first, _InputIterator __last, | ^~~~~~~~~~~~~~~~ /usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 In file included from /usr/include/c++/12/vector:70: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]', inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15, inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37, inlined from '_ForwardIterator 
std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2, inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:487:3: /usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 137 | { return std::__do_uninit_copy(__first, __last, __result); } | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~ In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]', inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15, inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37, inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2, inlined from 'void std::vector<_Tp, 
_Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:494:3: /usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 137 | { return std::__do_uninit_copy(__first, __last, __result); } | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19, inlined from 'virtual int yy::parser::parse()' at input.cc:2039:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1362: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaap stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbea3fcbc->Object::Object { } 0xbea3fd48->Object::Object { 0xbea3fcbc } 0xbea3fcbc->Object::~Object { 0xbea3fcbc, 0xbea3fd48 } Next token is token 'a' (0xbea3fd48 'a') 0xbea3fcb8->Object::Object { 0xbea3fd48 } 0xbea3fd48->Object::~Object { 0xbea3fcb8, 0xbea3fd48 } Shifting token 'a' (0xbea3fcb8 'a') 0x5b80b0->Object::Object { 0xbea3fcb8 } 0xbea3fcb8->Object::~Object { 0x5b80b0, 0xbea3fcb8 } Entering state 2 Stack 
now 0 2 0xbea3fd58->Object::Object { 0x5b80b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b80b0 'a') -> $$ = nterm item (0xbea3fd58 'a') 0x5b80b0->Object::~Object { 0x5b80b0, 0xbea3fd58 } 0x5b80b0->Object::Object { 0xbea3fd58 } 0xbea3fd58->Object::~Object { 0x5b80b0, 0xbea3fd58 } Entering state 11 Stack now 0 11 Reading a token 0xbea3fcbc->Object::Object { 0x5b80b0 } 0xbea3fd48->Object::Object { 0x5b80b0, 0xbea3fcbc } 0xbea3fcbc->Object::~Object { 0x5b80b0, 0xbea3fcbc, 0xbea3fd48 } Next token is token 'a' (0xbea3fd48 'a') 0xbea3fcb8->Object::Object { 0x5b80b0, 0xbea3fd48 } 0xbea3fd48->Object::~Object { 0x5b80b0, 0xbea3fcb8, 0xbea3fd48 } Shifting token 'a' (0xbea3fcb8 'a') 0x5b80c0->Object::Object { 0x5b80b0, 0xbea3fcb8 } 0xbea3fcb8->Object::~Object { 0x5b80b0, 0x5b80c0, 0xbea3fcb8 } Entering state 2 Stack now 0 11 2 0xbea3fd58->Object::Object { 0x5b80b0, 0x5b80c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b80c0 'a') -> $$ = nterm item (0xbea3fd58 'a') 0x5b80c0->Object::~Object { 0x5b80b0, 0x5b80c0, 0xbea3fd58 } 0x5b80c0->Object::Object { 0x5b80b0, 0xbea3fd58 } 0xbea3fd58->Object::~Object { 0x5b80b0, 0x5b80c0, 0xbea3fd58 } Entering state 11 Stack now 0 11 11 Reading a token 0xbea3fcbc->Object::Object { 0x5b80b0, 0x5b80c0 } 0xbea3fd48->Object::Object { 0x5b80b0, 0x5b80c0, 0xbea3fcbc } 0xbea3fcbc->Object::~Object { 0x5b80b0, 0x5b80c0, 0xbea3fcbc, 0xbea3fd48 } Next token is token 'a' (0xbea3fd48 'a') 0xbea3fcb8->Object::Object { 0x5b80b0, 0x5b80c0, 0xbea3fd48 } 0xbea3fd48->Object::~Object { 0x5b80b0, 0x5b80c0, 0xbea3fcb8, 0xbea3fd48 } Shifting token 'a' (0xbea3fcb8 'a') 0x5b80d0->Object::Object { 0x5b80b0, 0x5b80c0, 0xbea3fcb8 } 0xbea3fcb8->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fcb8 } Entering state 2 Stack now 0 11 11 2 0xbea3fd58->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b80d0 'a') -> $$ = nterm item (0xbea3fd58 'a') 0x5b80d0->Object::~Object { 0x5b80b0, 
0x5b80c0, 0x5b80d0, 0xbea3fd58 } 0x5b80d0->Object::Object { 0x5b80b0, 0x5b80c0, 0xbea3fd58 } 0xbea3fd58->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fd58 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbea3fcbc->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0 } 0xbea3fd48->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fcbc } 0xbea3fcbc->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fcbc, 0xbea3fd48 } Next token is token 'a' (0xbea3fd48 'a') 0xbea3fcb8->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fd48 } 0xbea3fd48->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fcb8, 0xbea3fd48 } Shifting token 'a' (0xbea3fcb8 'a') 0x5b80e0->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fcb8 } 0xbea3fcb8->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0, 0xbea3fcb8 } Entering state 2 Stack now 0 11 11 11 2 0xbea3fd58->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b80e0 'a') -> $$ = nterm item (0xbea3fd58 'a') 0x5b80e0->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0, 0xbea3fd58 } 0x5b80e0->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fd58 } 0xbea3fd58->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0, 0xbea3fd58 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbea3fcbc->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0 } 0xbea3fd48->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0, 0xbea3fcbc } 0xbea3fcbc->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0, 0xbea3fcbc, 0xbea3fd48 } Next token is token 'p' (0xbea3fd48 'p'Exception caught: cleaning lookahead and stack 0x5b80e0->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0, 0xbea3fd48 } 0x5b80d0->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fd48 } 0x5b80c0->Object::~Object { 0x5b80b0, 0x5b80c0, 0xbea3fd48 } 0x5b80b0->Object::~Object { 0x5b80b0, 0xbea3fd48 } 0xbea3fd48->Object::~Object { 0xbea3fd48 } exception caught: printer end 
{ } ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbea3fcbc->Object::Object { } 0xbea3fd48->Object::Object { 0xbea3fcbc } 0xbea3fcbc->Object::~Object { 0xbea3fcbc, 0xbea3fd48 } Next token is token 'a' (0xbea3fd48 'a') 0xbea3fcb8->Object::Object { 0xbea3fd48 } 0xbea3fd48->Object::~Object { 0xbea3fcb8, 0xbea3fd48 } Shifting token 'a' (0xbea3fcb8 'a') 0x5b80b0->Object::Object { 0xbea3fcb8 } 0xbea3fcb8->Object::~Object { 0x5b80b0, 0xbea3fcb8 } Entering state 2 Stack now 0 2 0xbea3fd58->Object::Object { 0x5b80b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b80b0 'a') -> $$ = nterm item (0xbea3fd58 'a') 0x5b80b0->Object::~Object { 0x5b80b0, 0xbea3fd58 } 0x5b80b0->Object::Object { 0xbea3fd58 } 0xbea3fd58->Object::~Object { 0x5b80b0, 0xbea3fd58 } Entering state 11 Stack now 0 11 Reading a token 0xbea3fcbc->Object::Object { 0x5b80b0 } 0xbea3fd48->Object::Object { 0x5b80b0, 0xbea3fcbc } 0xbea3fcbc->Object::~Object { 0x5b80b0, 0xbea3fcbc, 0xbea3fd48 } Next token is token 'a' (0xbea3fd48 'a') 0xbea3fcb8->Object::Object { 0x5b80b0, 0xbea3fd48 } 0xbea3fd48->Object::~Object { 0x5b80b0, 0xbea3fcb8, 0xbea3fd48 } Shifting token 'a' (0xbea3fcb8 'a') 0x5b80c0->Object::Object { 0x5b80b0, 0xbea3fcb8 } 0xbea3fcb8->Object::~Object { 0x5b80b0, 0x5b80c0, 0xbea3fcb8 } Entering state 2 Stack now 0 11 2 0xbea3fd58->Object::Object { 0x5b80b0, 0x5b80c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b80c0 'a') -> $$ = nterm item (0xbea3fd58 'a') 0x5b80c0->Object::~Object { 0x5b80b0, 0x5b80c0, 0xbea3fd58 } 0x5b80c0->Object::Object { 0x5b80b0, 0xbea3fd58 } 0xbea3fd58->Object::~Object { 0x5b80b0, 0x5b80c0, 0xbea3fd58 } Entering state 11 Stack now 0 11 11 Reading a token 0xbea3fcbc->Object::Object { 0x5b80b0, 0x5b80c0 } 0xbea3fd48->Object::Object { 0x5b80b0, 0x5b80c0, 0xbea3fcbc } 0xbea3fcbc->Object::~Object { 0x5b80b0, 0x5b80c0, 0xbea3fcbc, 0xbea3fd48 } Next token is token 
'a' (0xbea3fd48 'a') 0xbea3fcb8->Object::Object { 0x5b80b0, 0x5b80c0, 0xbea3fd48 } 0xbea3fd48->Object::~Object { 0x5b80b0, 0x5b80c0, 0xbea3fcb8, 0xbea3fd48 } Shifting token 'a' (0xbea3fcb8 'a') 0x5b80d0->Object::Object { 0x5b80b0, 0x5b80c0, 0xbea3fcb8 } 0xbea3fcb8->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fcb8 } Entering state 2 Stack now 0 11 11 2 0xbea3fd58->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b80d0 'a') -> $$ = nterm item (0xbea3fd58 'a') 0x5b80d0->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fd58 } 0x5b80d0->Object::Object { 0x5b80b0, 0x5b80c0, 0xbea3fd58 } 0xbea3fd58->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fd58 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbea3fcbc->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0 } 0xbea3fd48->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fcbc } 0xbea3fcbc->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fcbc, 0xbea3fd48 } Next token is token 'a' (0xbea3fd48 'a') 0xbea3fcb8->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fd48 } 0xbea3fd48->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fcb8, 0xbea3fd48 } Shifting token 'a' (0xbea3fcb8 'a') 0x5b80e0->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fcb8 } 0xbea3fcb8->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0, 0xbea3fcb8 } Entering state 2 Stack now 0 11 11 11 2 0xbea3fd58->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x5b80e0 'a') -> $$ = nterm item (0xbea3fd58 'a') 0x5b80e0->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0, 0xbea3fd58 } 0x5b80e0->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fd58 } 0xbea3fd58->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0, 0xbea3fd58 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbea3fcbc->Object::Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0 } 0xbea3fd48->Object::Object { 
0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0, 0xbea3fcbc } 0xbea3fcbc->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0, 0xbea3fcbc, 0xbea3fd48 } Next token is token 'p' (0xbea3fd48 'p'Exception caught: cleaning lookahead and stack 0x5b80e0->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0x5b80e0, 0xbea3fd48 } 0x5b80d0->Object::~Object { 0x5b80b0, 0x5b80c0, 0x5b80d0, 0xbea3fd48 } 0x5b80c0->Object::~Object { 0x5b80b0, 0x5b80c0, 0xbea3fd48 } 0x5b80b0->Object::~Object { 0x5b80b0, 0xbea3fd48 } 0xbea3fd48->Object::~Object { 0xbea3fd48 } exception caught: printer end { } ./c++.at:1362: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1362: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaT stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaR stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1362: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ======== Testing with C++ standard flags: '' ./c++.at:1411: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:63, from input.cc:148: /usr/include/c++/12/bits/stl_uninitialized.h: In function '_ForwardIterator std::__do_uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]': /usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 113 | 
__do_uninit_copy(_InputIterator __first, _InputIterator __last, | ^~~~~~~~~~~~~~~~ /usr/include/c++/12/bits/stl_uninitialized.h:113:5: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 In file included from /usr/include/c++/12/vector:70: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]', inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15, inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37, inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2, inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:487:3: /usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 137 | { return std::__do_uninit_copy(__first, __last, __result); } | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~ In static member function 'static _ForwardIterator std::__uninitialized_copy<_TrivialValueTypes>::__uninit_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = std::move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; bool _TrivialValueTypes = false]', inlined from '_ForwardIterator std::uninitialized_copy(_InputIterator, _InputIterator, _ForwardIterator) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*]' at /usr/include/c++/12/bits/stl_uninitialized.h:185:15, inlined from '_ForwardIterator std::__uninitialized_copy_a(_InputIterator, _InputIterator, _ForwardIterator, allocator<_Tp>&) [with _InputIterator = move_iterator; _ForwardIterator = yy::parser::stack_symbol_type*; _Tp = yy::parser::stack_symbol_type]' at /usr/include/c++/12/bits/stl_uninitialized.h:372:37, inlined from '_ForwardIterator std::__uninitialized_move_if_noexcept_a(_InputIterator, _InputIterator, _ForwardIterator, _Allocator&) [with _InputIterator = yy::parser::stack_symbol_type*; _ForwardIterator = yy::parser::stack_symbol_type*; _Allocator = allocator]' at /usr/include/c++/12/bits/stl_uninitialized.h:397:2, inlined from 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/vector.tcc:494:3: /usr/include/c++/12/bits/stl_uninitialized.h:137:39: note: parameter passing for argument of type 'std::move_iterator' changed in GCC 7.1 137 | { return std::__do_uninit_copy(__first, __last, __result); } | ~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19, inlined from 'virtual int yy::parser::parse()' at input.cc:2033:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1363: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaap stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbe93bcc4->Object::Object { } 0xbe93bd48->Object::Object { 0xbe93bcc4 } 0xbe93bcc4->Object::~Object { 0xbe93bcc4, 0xbe93bd48 } Next token is token 'a' (0xbe93bd48 'a') 0xbe93bcc0->Object::Object { 0xbe93bd48 } 0xbe93bd48->Object::~Object { 0xbe93bcc0, 0xbe93bd48 } Shifting token 'a' (0xbe93bcc0 'a') 0x24410b0->Object::Object { 0xbe93bcc0 } 0xbe93bcc0->Object::~Object { 0x24410b0, 0xbe93bcc0 } Entering state 1 Stack 
now 0 1 0xbe93bd58->Object::Object { 0x24410b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x24410b0 'a') -> $$ = nterm item (0xbe93bd58 'a') 0x24410b0->Object::~Object { 0x24410b0, 0xbe93bd58 } 0x24410b0->Object::Object { 0xbe93bd58 } 0xbe93bd58->Object::~Object { 0x24410b0, 0xbe93bd58 } Entering state 10 Stack now 0 10 Reading a token 0xbe93bcc4->Object::Object { 0x24410b0 } 0xbe93bd48->Object::Object { 0x24410b0, 0xbe93bcc4 } 0xbe93bcc4->Object::~Object { 0x24410b0, 0xbe93bcc4, 0xbe93bd48 } Next token is token 'a' (0xbe93bd48 'a') 0xbe93bcc0->Object::Object { 0x24410b0, 0xbe93bd48 } 0xbe93bd48->Object::~Object { 0x24410b0, 0xbe93bcc0, 0xbe93bd48 } Shifting token 'a' (0xbe93bcc0 'a') 0x24410c0->Object::Object { 0x24410b0, 0xbe93bcc0 } 0xbe93bcc0->Object::~Object { 0x24410b0, 0x24410c0, 0xbe93bcc0 } Entering state 1 Stack now 0 10 1 0xbe93bd58->Object::Object { 0x24410b0, 0x24410c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x24410c0 'a') -> $$ = nterm item (0xbe93bd58 'a') 0x24410c0->Object::~Object { 0x24410b0, 0x24410c0, 0xbe93bd58 } 0x24410c0->Object::Object { 0x24410b0, 0xbe93bd58 } 0xbe93bd58->Object::~Object { 0x24410b0, 0x24410c0, 0xbe93bd58 } Entering state 10 Stack now 0 10 10 Reading a token 0xbe93bcc4->Object::Object { 0x24410b0, 0x24410c0 } 0xbe93bd48->Object::Object { 0x24410b0, 0x24410c0, 0xbe93bcc4 } 0xbe93bcc4->Object::~Object { 0x24410b0, 0x24410c0, 0xbe93bcc4, 0xbe93bd48 } Next token is token 'a' (0xbe93bd48 'a') 0xbe93bcc0->Object::Object { 0x24410b0, 0x24410c0, 0xbe93bd48 } 0xbe93bd48->Object::~Object { 0x24410b0, 0x24410c0, 0xbe93bcc0, 0xbe93bd48 } Shifting token 'a' (0xbe93bcc0 'a') 0x24410d0->Object::Object { 0x24410b0, 0x24410c0, 0xbe93bcc0 } 0xbe93bcc0->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bcc0 } Entering state 1 Stack now 0 10 10 1 0xbe93bd58->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x24410d0 'a') -> $$ = nterm item (0xbe93bd58 'a') 
0x24410d0->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bd58 } 0x24410d0->Object::Object { 0x24410b0, 0x24410c0, 0xbe93bd58 } 0xbe93bd58->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bd58 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbe93bcc4->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0 } 0xbe93bd48->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bcc4 } 0xbe93bcc4->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bcc4, 0xbe93bd48 } Next token is token 'a' (0xbe93bd48 'a') 0xbe93bcc0->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bd48 } 0xbe93bd48->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bcc0, 0xbe93bd48 } Shifting token 'a' (0xbe93bcc0 'a') 0x24410e0->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bcc0 } 0xbe93bcc0->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0, 0xbe93bcc0 } Entering state 1 Stack now 0 10 10 10 1 0xbe93bd58->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x24410e0 'a') -> $$ = nterm item (0xbe93bd58 'a') 0x24410e0->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0, 0xbe93bd58 } 0x24410e0->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bd58 } 0xbe93bd58->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0, 0xbe93bd58 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbe93bcc4->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0 } 0xbe93bd48->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0, 0xbe93bcc4 } 0xbe93bcc4->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0, 0xbe93bcc4, 0xbe93bd48 } Next token is token 'p' (0xbe93bd48 'p'Exception caught: cleaning lookahead and stack 0x24410e0->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0, 0xbe93bd48 } 0x24410d0->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bd48 } 0x24410c0->Object::~Object { 0x24410b0, 0x24410c0, 0xbe93bd48 } 
0x24410b0->Object::~Object { 0x24410b0, 0xbe93bd48 } 0xbe93bd48->Object::~Object { 0xbe93bd48 } exception caught: printer end { } ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbe93bcc4->Object::Object { } 0xbe93bd48->Object::Object { 0xbe93bcc4 } 0xbe93bcc4->Object::~Object { 0xbe93bcc4, 0xbe93bd48 } Next token is token 'a' (0xbe93bd48 'a') 0xbe93bcc0->Object::Object { 0xbe93bd48 } 0xbe93bd48->Object::~Object { 0xbe93bcc0, 0xbe93bd48 } Shifting token 'a' (0xbe93bcc0 'a') 0x24410b0->Object::Object { 0xbe93bcc0 } 0xbe93bcc0->Object::~Object { 0x24410b0, 0xbe93bcc0 } Entering state 1 Stack now 0 1 0xbe93bd58->Object::Object { 0x24410b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x24410b0 'a') -> $$ = nterm item (0xbe93bd58 'a') 0x24410b0->Object::~Object { 0x24410b0, 0xbe93bd58 } 0x24410b0->Object::Object { 0xbe93bd58 } 0xbe93bd58->Object::~Object { 0x24410b0, 0xbe93bd58 } Entering state 10 Stack now 0 10 Reading a token 0xbe93bcc4->Object::Object { 0x24410b0 } 0xbe93bd48->Object::Object { 0x24410b0, 0xbe93bcc4 } 0xbe93bcc4->Object::~Object { 0x24410b0, 0xbe93bcc4, 0xbe93bd48 } Next token is token 'a' (0xbe93bd48 'a') 0xbe93bcc0->Object::Object { 0x24410b0, 0xbe93bd48 } 0xbe93bd48->Object::~Object { 0x24410b0, 0xbe93bcc0, 0xbe93bd48 } Shifting token 'a' (0xbe93bcc0 'a') 0x24410c0->Object::Object { 0x24410b0, 0xbe93bcc0 } 0xbe93bcc0->Object::~Object { 0x24410b0, 0x24410c0, 0xbe93bcc0 } Entering state 1 Stack now 0 10 1 0xbe93bd58->Object::Object { 0x24410b0, 0x24410c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x24410c0 'a') -> $$ = nterm item (0xbe93bd58 'a') 0x24410c0->Object::~Object { 0x24410b0, 0x24410c0, 0xbe93bd58 } 0x24410c0->Object::Object { 0x24410b0, 0xbe93bd58 } 0xbe93bd58->Object::~Object { 0x24410b0, 0x24410c0, 0xbe93bd58 } Entering state 10 Stack now 0 10 10 Reading a token 0xbe93bcc4->Object::Object { 0x24410b0, 0x24410c0 } 
0xbe93bd48->Object::Object { 0x24410b0, 0x24410c0, 0xbe93bcc4 } 0xbe93bcc4->Object::~Object { 0x24410b0, 0x24410c0, 0xbe93bcc4, 0xbe93bd48 } Next token is token 'a' (0xbe93bd48 'a') 0xbe93bcc0->Object::Object { 0x24410b0, 0x24410c0, 0xbe93bd48 } 0xbe93bd48->Object::~Object { 0x24410b0, 0x24410c0, 0xbe93bcc0, 0xbe93bd48 } Shifting token 'a' (0xbe93bcc0 'a') 0x24410d0->Object::Object { 0x24410b0, 0x24410c0, 0xbe93bcc0 } 0xbe93bcc0->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bcc0 } Entering state 1 Stack now 0 10 10 1 0xbe93bd58->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x24410d0 'a') -> $$ = nterm item (0xbe93bd58 'a') 0x24410d0->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bd58 } 0x24410d0->Object::Object { 0x24410b0, 0x24410c0, 0xbe93bd58 } 0xbe93bd58->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bd58 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbe93bcc4->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0 } 0xbe93bd48->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bcc4 } 0xbe93bcc4->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bcc4, 0xbe93bd48 } Next token is token 'a' (0xbe93bd48 'a') 0xbe93bcc0->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bd48 } 0xbe93bd48->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bcc0, 0xbe93bd48 } Shifting token 'a' (0xbe93bcc0 'a') 0x24410e0->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bcc0 } 0xbe93bcc0->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0, 0xbe93bcc0 } Entering state 1 Stack now 0 10 10 10 1 0xbe93bd58->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x24410e0 'a') -> $$ = nterm item (0xbe93bd58 'a') 0x24410e0->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0, 0xbe93bd58 } 0x24410e0->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bd58 } 
0xbe93bd58->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0, 0xbe93bd58 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbe93bcc4->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0 } 0xbe93bd48->Object::Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0, 0xbe93bcc4 } 0xbe93bcc4->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0, 0xbe93bcc4, 0xbe93bd48 } Next token is token 'p' (0xbe93bd48 'p'Exception caught: cleaning lookahead and stack 0x24410e0->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0x24410e0, 0xbe93bd48 } 0x24410d0->Object::~Object { 0x24410b0, 0x24410c0, 0x24410d0, 0xbe93bd48 } 0x24410c0->Object::~Object { 0x24410b0, 0x24410c0, 0xbe93bd48 } 0x24410b0->Object::~Object { 0x24410b0, 0xbe93bd48 } 0xbe93bd48->Object::~Object { 0xbe93bd48 } exception caught: printer end { } ./c++.at:1363: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1363: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaT stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaR stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1363: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ======== Testing with C++ standard flags: '' ./c++.at:1411: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ======== Testing with C++ standard flags: '' ./c++.at:1411: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, 
from input.cc:148: /usr/include/c++/12/bits/vector.tcc: In member function 'constexpr void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'constexpr std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'constexpr void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'constexpr std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'constexpr void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19, inlined from 'virtual int yy::parser::parse()' at input.cc:2039:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1362: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaap stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbee2ecc4->Object::Object { } 0xbee2ed48->Object::Object { 0xbee2ecc4 } 0xbee2ecc4->Object::~Object { 0xbee2ecc4, 0xbee2ed48 } Next token is token 'a' (0xbee2ed48 'a') 0xbee2ecc0->Object::Object { 0xbee2ed48 } 0xbee2ed48->Object::~Object { 0xbee2ecc0, 0xbee2ed48 } Shifting token 'a' (0xbee2ecc0 'a') 0x4b00b0->Object::Object { 0xbee2ecc0 } 0xbee2ecc0->Object::~Object { 0x4b00b0, 0xbee2ecc0 } Entering state 
2 Stack now 0 2 0xbee2ed58->Object::Object { 0x4b00b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x4b00b0 'a') -> $$ = nterm item (0xbee2ed58 'a') 0x4b00b0->Object::~Object { 0x4b00b0, 0xbee2ed58 } 0x4b00b0->Object::Object { 0xbee2ed58 } 0xbee2ed58->Object::~Object { 0x4b00b0, 0xbee2ed58 } Entering state 11 Stack now 0 11 Reading a token 0xbee2ecc4->Object::Object { 0x4b00b0 } 0xbee2ed48->Object::Object { 0x4b00b0, 0xbee2ecc4 } 0xbee2ecc4->Object::~Object { 0x4b00b0, 0xbee2ecc4, 0xbee2ed48 } Next token is token 'a' (0xbee2ed48 'a') 0xbee2ecc0->Object::Object { 0x4b00b0, 0xbee2ed48 } 0xbee2ed48->Object::~Object { 0x4b00b0, 0xbee2ecc0, 0xbee2ed48 } Shifting token 'a' (0xbee2ecc0 'a') 0x4b00c0->Object::Object { 0x4b00b0, 0xbee2ecc0 } 0xbee2ecc0->Object::~Object { 0x4b00b0, 0x4b00c0, 0xbee2ecc0 } Entering state 2 Stack now 0 11 2 0xbee2ed58->Object::Object { 0x4b00b0, 0x4b00c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x4b00c0 'a') -> $$ = nterm item (0xbee2ed58 'a') 0x4b00c0->Object::~Object { 0x4b00b0, 0x4b00c0, 0xbee2ed58 } 0x4b00c0->Object::Object { 0x4b00b0, 0xbee2ed58 } 0xbee2ed58->Object::~Object { 0x4b00b0, 0x4b00c0, 0xbee2ed58 } Entering state 11 Stack now 0 11 11 Reading a token 0xbee2ecc4->Object::Object { 0x4b00b0, 0x4b00c0 } 0xbee2ed48->Object::Object { 0x4b00b0, 0x4b00c0, 0xbee2ecc4 } 0xbee2ecc4->Object::~Object { 0x4b00b0, 0x4b00c0, 0xbee2ecc4, 0xbee2ed48 } Next token is token 'a' (0xbee2ed48 'a') 0xbee2ecc0->Object::Object { 0x4b00b0, 0x4b00c0, 0xbee2ed48 } 0xbee2ed48->Object::~Object { 0x4b00b0, 0x4b00c0, 0xbee2ecc0, 0xbee2ed48 } Shifting token 'a' (0xbee2ecc0 'a') 0x4b00d0->Object::Object { 0x4b00b0, 0x4b00c0, 0xbee2ecc0 } 0xbee2ecc0->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ecc0 } Entering state 2 Stack now 0 11 11 2 0xbee2ed58->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x4b00d0 'a') -> $$ = nterm item (0xbee2ed58 'a') 0x4b00d0->Object::~Object { 
0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ed58 } 0x4b00d0->Object::Object { 0x4b00b0, 0x4b00c0, 0xbee2ed58 } 0xbee2ed58->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ed58 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbee2ecc4->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0 } 0xbee2ed48->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ecc4 } 0xbee2ecc4->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ecc4, 0xbee2ed48 } Next token is token 'a' (0xbee2ed48 'a') 0xbee2ecc0->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ed48 } 0xbee2ed48->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ecc0, 0xbee2ed48 } Shifting token 'a' (0xbee2ecc0 'a') 0x4b00e0->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ecc0 } 0xbee2ecc0->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0, 0xbee2ecc0 } Entering state 2 Stack now 0 11 11 11 2 0xbee2ed58->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x4b00e0 'a') -> $$ = nterm item (0xbee2ed58 'a') 0x4b00e0->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0, 0xbee2ed58 } 0x4b00e0->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ed58 } 0xbee2ed58->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0, 0xbee2ed58 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbee2ecc4->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0 } 0xbee2ed48->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0, 0xbee2ecc4 } 0xbee2ecc4->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0, 0xbee2ecc4, 0xbee2ed48 } Next token is token 'p' (0xbee2ed48 'p'Exception caught: cleaning lookahead and stack 0x4b00e0->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0, 0xbee2ed48 } 0x4b00d0->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ed48 } 0x4b00c0->Object::~Object { 0x4b00b0, 0x4b00c0, 0xbee2ed48 } 0x4b00b0->Object::~Object { 0x4b00b0, 0xbee2ed48 } 0xbee2ed48->Object::~Object { 0xbee2ed48 } exception caught: 
printer end { } ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbee2ecc4->Object::Object { } 0xbee2ed48->Object::Object { 0xbee2ecc4 } 0xbee2ecc4->Object::~Object { 0xbee2ecc4, 0xbee2ed48 } Next token is token 'a' (0xbee2ed48 'a') 0xbee2ecc0->Object::Object { 0xbee2ed48 } 0xbee2ed48->Object::~Object { 0xbee2ecc0, 0xbee2ed48 } Shifting token 'a' (0xbee2ecc0 'a') 0x4b00b0->Object::Object { 0xbee2ecc0 } 0xbee2ecc0->Object::~Object { 0x4b00b0, 0xbee2ecc0 } Entering state 2 Stack now 0 2 0xbee2ed58->Object::Object { 0x4b00b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x4b00b0 'a') -> $$ = nterm item (0xbee2ed58 'a') 0x4b00b0->Object::~Object { 0x4b00b0, 0xbee2ed58 } 0x4b00b0->Object::Object { 0xbee2ed58 } 0xbee2ed58->Object::~Object { 0x4b00b0, 0xbee2ed58 } Entering state 11 Stack now 0 11 Reading a token 0xbee2ecc4->Object::Object { 0x4b00b0 } 0xbee2ed48->Object::Object { 0x4b00b0, 0xbee2ecc4 } 0xbee2ecc4->Object::~Object { 0x4b00b0, 0xbee2ecc4, 0xbee2ed48 } Next token is token 'a' (0xbee2ed48 'a') 0xbee2ecc0->Object::Object { 0x4b00b0, 0xbee2ed48 } 0xbee2ed48->Object::~Object { 0x4b00b0, 0xbee2ecc0, 0xbee2ed48 } Shifting token 'a' (0xbee2ecc0 'a') 0x4b00c0->Object::Object { 0x4b00b0, 0xbee2ecc0 } 0xbee2ecc0->Object::~Object { 0x4b00b0, 0x4b00c0, 0xbee2ecc0 } Entering state 2 Stack now 0 11 2 0xbee2ed58->Object::Object { 0x4b00b0, 0x4b00c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x4b00c0 'a') -> $$ = nterm item (0xbee2ed58 'a') 0x4b00c0->Object::~Object { 0x4b00b0, 0x4b00c0, 0xbee2ed58 } 0x4b00c0->Object::Object { 0x4b00b0, 0xbee2ed58 } 0xbee2ed58->Object::~Object { 0x4b00b0, 0x4b00c0, 0xbee2ed58 } Entering state 11 Stack now 0 11 11 Reading a token 0xbee2ecc4->Object::Object { 0x4b00b0, 0x4b00c0 } 0xbee2ed48->Object::Object { 0x4b00b0, 0x4b00c0, 0xbee2ecc4 } 0xbee2ecc4->Object::~Object { 0x4b00b0, 0x4b00c0, 0xbee2ecc4, 0xbee2ed48 } Next 
token is token 'a' (0xbee2ed48 'a') 0xbee2ecc0->Object::Object { 0x4b00b0, 0x4b00c0, 0xbee2ed48 } 0xbee2ed48->Object::~Object { 0x4b00b0, 0x4b00c0, 0xbee2ecc0, 0xbee2ed48 } Shifting token 'a' (0xbee2ecc0 'a') 0x4b00d0->Object::Object { 0x4b00b0, 0x4b00c0, 0xbee2ecc0 } 0xbee2ecc0->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ecc0 } Entering state 2 Stack now 0 11 11 2 0xbee2ed58->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x4b00d0 'a') -> $$ = nterm item (0xbee2ed58 'a') 0x4b00d0->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ed58 } 0x4b00d0->Object::Object { 0x4b00b0, 0x4b00c0, 0xbee2ed58 } 0xbee2ed58->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ed58 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbee2ecc4->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0 } 0xbee2ed48->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ecc4 } 0xbee2ecc4->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ecc4, 0xbee2ed48 } Next token is token 'a' (0xbee2ed48 'a') 0xbee2ecc0->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ed48 } 0xbee2ed48->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ecc0, 0xbee2ed48 } Shifting token 'a' (0xbee2ecc0 'a') 0x4b00e0->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ecc0 } 0xbee2ecc0->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0, 0xbee2ecc0 } Entering state 2 Stack now 0 11 11 11 2 0xbee2ed58->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x4b00e0 'a') -> $$ = nterm item (0xbee2ed58 'a') 0x4b00e0->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0, 0xbee2ed58 } 0x4b00e0->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ed58 } 0xbee2ed58->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0, 0xbee2ed58 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbee2ecc4->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0 } 
0xbee2ed48->Object::Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0, 0xbee2ecc4 } 0xbee2ecc4->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0, 0xbee2ecc4, 0xbee2ed48 } Next token is token 'p' (0xbee2ed48 'p'Exception caught: cleaning lookahead and stack 0x4b00e0->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0x4b00e0, 0xbee2ed48 } 0x4b00d0->Object::~Object { 0x4b00b0, 0x4b00c0, 0x4b00d0, 0xbee2ed48 } 0x4b00c0->Object::~Object { 0x4b00b0, 0x4b00c0, 0xbee2ed48 } 0x4b00b0->Object::~Object { 0x4b00b0, 0xbee2ed48 } 0xbee2ed48->Object::~Object { 0xbee2ed48 } exception caught: printer end { } ./c++.at:1362: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1362: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaT stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaR stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1362: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from input.cc:148: /usr/include/c++/12/bits/vector.tcc: In member function 'constexpr void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'constexpr std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'constexpr void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'constexpr std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'constexpr void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19, inlined from 'virtual int yy::parser::parse()' at input.cc:2033:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1363: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaap stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbeb83cc4->Object::Object { } 0xbeb83d48->Object::Object { 0xbeb83cc4 } 0xbeb83cc4->Object::~Object { 0xbeb83cc4, 0xbeb83d48 } Next token is token 'a' (0xbeb83d48 'a') 0xbeb83cc0->Object::Object { 0xbeb83d48 } 0xbeb83d48->Object::~Object { 0xbeb83cc0, 0xbeb83d48 } Shifting token 'a' (0xbeb83cc0 'a') 0x6f20b0->Object::Object { 0xbeb83cc0 } 0xbeb83cc0->Object::~Object { 0x6f20b0, 0xbeb83cc0 } Entering state 
1 Stack now 0 1 0xbeb83d58->Object::Object { 0x6f20b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6f20b0 'a') -> $$ = nterm item (0xbeb83d58 'a') 0x6f20b0->Object::~Object { 0x6f20b0, 0xbeb83d58 } 0x6f20b0->Object::Object { 0xbeb83d58 } 0xbeb83d58->Object::~Object { 0x6f20b0, 0xbeb83d58 } Entering state 10 Stack now 0 10 Reading a token 0xbeb83cc4->Object::Object { 0x6f20b0 } 0xbeb83d48->Object::Object { 0x6f20b0, 0xbeb83cc4 } 0xbeb83cc4->Object::~Object { 0x6f20b0, 0xbeb83cc4, 0xbeb83d48 } Next token is token 'a' (0xbeb83d48 'a') 0xbeb83cc0->Object::Object { 0x6f20b0, 0xbeb83d48 } 0xbeb83d48->Object::~Object { 0x6f20b0, 0xbeb83cc0, 0xbeb83d48 } Shifting token 'a' (0xbeb83cc0 'a') 0x6f20c0->Object::Object { 0x6f20b0, 0xbeb83cc0 } 0xbeb83cc0->Object::~Object { 0x6f20b0, 0x6f20c0, 0xbeb83cc0 } Entering state 1 Stack now 0 10 1 0xbeb83d58->Object::Object { 0x6f20b0, 0x6f20c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6f20c0 'a') -> $$ = nterm item (0xbeb83d58 'a') 0x6f20c0->Object::~Object { 0x6f20b0, 0x6f20c0, 0xbeb83d58 } 0x6f20c0->Object::Object { 0x6f20b0, 0xbeb83d58 } 0xbeb83d58->Object::~Object { 0x6f20b0, 0x6f20c0, 0xbeb83d58 } Entering state 10 Stack now 0 10 10 Reading a token 0xbeb83cc4->Object::Object { 0x6f20b0, 0x6f20c0 } 0xbeb83d48->Object::Object { 0x6f20b0, 0x6f20c0, 0xbeb83cc4 } 0xbeb83cc4->Object::~Object { 0x6f20b0, 0x6f20c0, 0xbeb83cc4, 0xbeb83d48 } Next token is token 'a' (0xbeb83d48 'a') 0xbeb83cc0->Object::Object { 0x6f20b0, 0x6f20c0, 0xbeb83d48 } 0xbeb83d48->Object::~Object { 0x6f20b0, 0x6f20c0, 0xbeb83cc0, 0xbeb83d48 } Shifting token 'a' (0xbeb83cc0 'a') 0x6f20d0->Object::Object { 0x6f20b0, 0x6f20c0, 0xbeb83cc0 } 0xbeb83cc0->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83cc0 } Entering state 1 Stack now 0 10 10 1 0xbeb83d58->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6f20d0 'a') -> $$ = nterm item (0xbeb83d58 'a') 0x6f20d0->Object::~Object { 
0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83d58 } 0x6f20d0->Object::Object { 0x6f20b0, 0x6f20c0, 0xbeb83d58 } 0xbeb83d58->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83d58 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbeb83cc4->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0 } 0xbeb83d48->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83cc4 } 0xbeb83cc4->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83cc4, 0xbeb83d48 } Next token is token 'a' (0xbeb83d48 'a') 0xbeb83cc0->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83d48 } 0xbeb83d48->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83cc0, 0xbeb83d48 } Shifting token 'a' (0xbeb83cc0 'a') 0x6f20e0->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83cc0 } 0xbeb83cc0->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0, 0xbeb83cc0 } Entering state 1 Stack now 0 10 10 10 1 0xbeb83d58->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6f20e0 'a') -> $$ = nterm item (0xbeb83d58 'a') 0x6f20e0->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0, 0xbeb83d58 } 0x6f20e0->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83d58 } 0xbeb83d58->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0, 0xbeb83d58 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbeb83cc4->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0 } 0xbeb83d48->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0, 0xbeb83cc4 } 0xbeb83cc4->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0, 0xbeb83cc4, 0xbeb83d48 } Next token is token 'p' (0xbeb83d48 'p'Exception caught: cleaning lookahead and stack 0x6f20e0->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0, 0xbeb83d48 } 0x6f20d0->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83d48 } 0x6f20c0->Object::~Object { 0x6f20b0, 0x6f20c0, 0xbeb83d48 } 0x6f20b0->Object::~Object { 0x6f20b0, 0xbeb83d48 } 0xbeb83d48->Object::~Object { 0xbeb83d48 } exception caught: 
printer end { } ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbeb83cc4->Object::Object { } 0xbeb83d48->Object::Object { 0xbeb83cc4 } 0xbeb83cc4->Object::~Object { 0xbeb83cc4, 0xbeb83d48 } Next token is token 'a' (0xbeb83d48 'a') 0xbeb83cc0->Object::Object { 0xbeb83d48 } 0xbeb83d48->Object::~Object { 0xbeb83cc0, 0xbeb83d48 } Shifting token 'a' (0xbeb83cc0 'a') 0x6f20b0->Object::Object { 0xbeb83cc0 } 0xbeb83cc0->Object::~Object { 0x6f20b0, 0xbeb83cc0 } Entering state 1 Stack now 0 1 0xbeb83d58->Object::Object { 0x6f20b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6f20b0 'a') -> $$ = nterm item (0xbeb83d58 'a') 0x6f20b0->Object::~Object { 0x6f20b0, 0xbeb83d58 } 0x6f20b0->Object::Object { 0xbeb83d58 } 0xbeb83d58->Object::~Object { 0x6f20b0, 0xbeb83d58 } Entering state 10 Stack now 0 10 Reading a token 0xbeb83cc4->Object::Object { 0x6f20b0 } 0xbeb83d48->Object::Object { 0x6f20b0, 0xbeb83cc4 } 0xbeb83cc4->Object::~Object { 0x6f20b0, 0xbeb83cc4, 0xbeb83d48 } Next token is token 'a' (0xbeb83d48 'a') 0xbeb83cc0->Object::Object { 0x6f20b0, 0xbeb83d48 } 0xbeb83d48->Object::~Object { 0x6f20b0, 0xbeb83cc0, 0xbeb83d48 } Shifting token 'a' (0xbeb83cc0 'a') 0x6f20c0->Object::Object { 0x6f20b0, 0xbeb83cc0 } 0xbeb83cc0->Object::~Object { 0x6f20b0, 0x6f20c0, 0xbeb83cc0 } Entering state 1 Stack now 0 10 1 0xbeb83d58->Object::Object { 0x6f20b0, 0x6f20c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6f20c0 'a') -> $$ = nterm item (0xbeb83d58 'a') 0x6f20c0->Object::~Object { 0x6f20b0, 0x6f20c0, 0xbeb83d58 } 0x6f20c0->Object::Object { 0x6f20b0, 0xbeb83d58 } 0xbeb83d58->Object::~Object { 0x6f20b0, 0x6f20c0, 0xbeb83d58 } Entering state 10 Stack now 0 10 10 Reading a token 0xbeb83cc4->Object::Object { 0x6f20b0, 0x6f20c0 } 0xbeb83d48->Object::Object { 0x6f20b0, 0x6f20c0, 0xbeb83cc4 } 0xbeb83cc4->Object::~Object { 0x6f20b0, 0x6f20c0, 0xbeb83cc4, 0xbeb83d48 } Next 
token is token 'a' (0xbeb83d48 'a') 0xbeb83cc0->Object::Object { 0x6f20b0, 0x6f20c0, 0xbeb83d48 } 0xbeb83d48->Object::~Object { 0x6f20b0, 0x6f20c0, 0xbeb83cc0, 0xbeb83d48 } Shifting token 'a' (0xbeb83cc0 'a') 0x6f20d0->Object::Object { 0x6f20b0, 0x6f20c0, 0xbeb83cc0 } 0xbeb83cc0->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83cc0 } Entering state 1 Stack now 0 10 10 1 0xbeb83d58->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6f20d0 'a') -> $$ = nterm item (0xbeb83d58 'a') 0x6f20d0->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83d58 } 0x6f20d0->Object::Object { 0x6f20b0, 0x6f20c0, 0xbeb83d58 } 0xbeb83d58->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83d58 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbeb83cc4->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0 } 0xbeb83d48->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83cc4 } 0xbeb83cc4->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83cc4, 0xbeb83d48 } Next token is token 'a' (0xbeb83d48 'a') 0xbeb83cc0->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83d48 } 0xbeb83d48->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83cc0, 0xbeb83d48 } Shifting token 'a' (0xbeb83cc0 'a') 0x6f20e0->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83cc0 } 0xbeb83cc0->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0, 0xbeb83cc0 } Entering state 1 Stack now 0 10 10 10 1 0xbeb83d58->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x6f20e0 'a') -> $$ = nterm item (0xbeb83d58 'a') 0x6f20e0->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0, 0xbeb83d58 } 0x6f20e0->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83d58 } 0xbeb83d58->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0, 0xbeb83d58 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbeb83cc4->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0 } 
0xbeb83d48->Object::Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0, 0xbeb83cc4 } 0xbeb83cc4->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0, 0xbeb83cc4, 0xbeb83d48 } Next token is token 'p' (0xbeb83d48 'p'Exception caught: cleaning lookahead and stack 0x6f20e0->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0x6f20e0, 0xbeb83d48 } 0x6f20d0->Object::~Object { 0x6f20b0, 0x6f20c0, 0x6f20d0, 0xbeb83d48 } 0x6f20c0->Object::~Object { 0x6f20b0, 0x6f20c0, 0xbeb83d48 } 0x6f20b0->Object::~Object { 0x6f20b0, 0xbeb83d48 } 0xbeb83d48->Object::~Object { 0xbeb83d48 } exception caught: printer end { } ./c++.at:1363: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1363: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaT stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaR stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1363: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ======== Testing with C++ standard flags: '' ./c++.at:1411: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: stdout: ======== Testing with C++ standard flags: '' ./c++.at:1411: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from input.cc:148: /usr/include/c++/12/bits/vector.tcc: In member function 'constexpr void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'constexpr std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'constexpr void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'constexpr std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'constexpr void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19, inlined from 'virtual int yy::parser::parse()' at input.cc:2039:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1362: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaap stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbef25cc4->Object::Object { } 0xbef25d48->Object::Object { 0xbef25cc4 } 0xbef25cc4->Object::~Object { 0xbef25cc4, 0xbef25d48 } Next token is token 'a' (0xbef25d48 'a') 0xbef25cc0->Object::Object { 0xbef25d48 } 0xbef25d48->Object::~Object { 0xbef25cc0, 0xbef25d48 } Shifting token 'a' (0xbef25cc0 'a') 0xfa00b0->Object::Object { 0xbef25cc0 } 0xbef25cc0->Object::~Object { 0xfa00b0, 0xbef25cc0 } Entering state 
2 Stack now 0 2 0xbef25d58->Object::Object { 0xfa00b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0xfa00b0 'a') -> $$ = nterm item (0xbef25d58 'a') 0xfa00b0->Object::~Object { 0xfa00b0, 0xbef25d58 } 0xfa00b0->Object::Object { 0xbef25d58 } 0xbef25d58->Object::~Object { 0xfa00b0, 0xbef25d58 } Entering state 11 Stack now 0 11 Reading a token 0xbef25cc4->Object::Object { 0xfa00b0 } 0xbef25d48->Object::Object { 0xfa00b0, 0xbef25cc4 } 0xbef25cc4->Object::~Object { 0xfa00b0, 0xbef25cc4, 0xbef25d48 } Next token is token 'a' (0xbef25d48 'a') 0xbef25cc0->Object::Object { 0xfa00b0, 0xbef25d48 } 0xbef25d48->Object::~Object { 0xfa00b0, 0xbef25cc0, 0xbef25d48 } Shifting token 'a' (0xbef25cc0 'a') 0xfa00c0->Object::Object { 0xfa00b0, 0xbef25cc0 } 0xbef25cc0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xbef25cc0 } Entering state 2 Stack now 0 11 2 0xbef25d58->Object::Object { 0xfa00b0, 0xfa00c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0xfa00c0 'a') -> $$ = nterm item (0xbef25d58 'a') 0xfa00c0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xbef25d58 } 0xfa00c0->Object::Object { 0xfa00b0, 0xbef25d58 } 0xbef25d58->Object::~Object { 0xfa00b0, 0xfa00c0, 0xbef25d58 } Entering state 11 Stack now 0 11 11 Reading a token 0xbef25cc4->Object::Object { 0xfa00b0, 0xfa00c0 } 0xbef25d48->Object::Object { 0xfa00b0, 0xfa00c0, 0xbef25cc4 } 0xbef25cc4->Object::~Object { 0xfa00b0, 0xfa00c0, 0xbef25cc4, 0xbef25d48 } Next token is token 'a' (0xbef25d48 'a') 0xbef25cc0->Object::Object { 0xfa00b0, 0xfa00c0, 0xbef25d48 } 0xbef25d48->Object::~Object { 0xfa00b0, 0xfa00c0, 0xbef25cc0, 0xbef25d48 } Shifting token 'a' (0xbef25cc0 'a') 0xfa00d0->Object::Object { 0xfa00b0, 0xfa00c0, 0xbef25cc0 } 0xbef25cc0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25cc0 } Entering state 2 Stack now 0 11 11 2 0xbef25d58->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0xfa00d0 'a') -> $$ = nterm item (0xbef25d58 'a') 0xfa00d0->Object::~Object { 
0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25d58 } 0xfa00d0->Object::Object { 0xfa00b0, 0xfa00c0, 0xbef25d58 } 0xbef25d58->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25d58 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbef25cc4->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0 } 0xbef25d48->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25cc4 } 0xbef25cc4->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25cc4, 0xbef25d48 } Next token is token 'a' (0xbef25d48 'a') 0xbef25cc0->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25d48 } 0xbef25d48->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25cc0, 0xbef25d48 } Shifting token 'a' (0xbef25cc0 'a') 0xfa00e0->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25cc0 } 0xbef25cc0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0, 0xbef25cc0 } Entering state 2 Stack now 0 11 11 11 2 0xbef25d58->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0xfa00e0 'a') -> $$ = nterm item (0xbef25d58 'a') 0xfa00e0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0, 0xbef25d58 } 0xfa00e0->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25d58 } 0xbef25d58->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0, 0xbef25d58 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbef25cc4->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0 } 0xbef25d48->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0, 0xbef25cc4 } 0xbef25cc4->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0, 0xbef25cc4, 0xbef25d48 } Next token is token 'p' (0xbef25d48 'p'Exception caught: cleaning lookahead and stack 0xfa00e0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0, 0xbef25d48 } 0xfa00d0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25d48 } 0xfa00c0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xbef25d48 } 0xfa00b0->Object::~Object { 0xfa00b0, 0xbef25d48 } 0xbef25d48->Object::~Object { 0xbef25d48 } exception caught: 
printer end { } ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbef25cc4->Object::Object { } 0xbef25d48->Object::Object { 0xbef25cc4 } 0xbef25cc4->Object::~Object { 0xbef25cc4, 0xbef25d48 } Next token is token 'a' (0xbef25d48 'a') 0xbef25cc0->Object::Object { 0xbef25d48 } 0xbef25d48->Object::~Object { 0xbef25cc0, 0xbef25d48 } Shifting token 'a' (0xbef25cc0 'a') 0xfa00b0->Object::Object { 0xbef25cc0 } 0xbef25cc0->Object::~Object { 0xfa00b0, 0xbef25cc0 } Entering state 2 Stack now 0 2 0xbef25d58->Object::Object { 0xfa00b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0xfa00b0 'a') -> $$ = nterm item (0xbef25d58 'a') 0xfa00b0->Object::~Object { 0xfa00b0, 0xbef25d58 } 0xfa00b0->Object::Object { 0xbef25d58 } 0xbef25d58->Object::~Object { 0xfa00b0, 0xbef25d58 } Entering state 11 Stack now 0 11 Reading a token 0xbef25cc4->Object::Object { 0xfa00b0 } 0xbef25d48->Object::Object { 0xfa00b0, 0xbef25cc4 } 0xbef25cc4->Object::~Object { 0xfa00b0, 0xbef25cc4, 0xbef25d48 } Next token is token 'a' (0xbef25d48 'a') 0xbef25cc0->Object::Object { 0xfa00b0, 0xbef25d48 } 0xbef25d48->Object::~Object { 0xfa00b0, 0xbef25cc0, 0xbef25d48 } Shifting token 'a' (0xbef25cc0 'a') 0xfa00c0->Object::Object { 0xfa00b0, 0xbef25cc0 } 0xbef25cc0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xbef25cc0 } Entering state 2 Stack now 0 11 2 0xbef25d58->Object::Object { 0xfa00b0, 0xfa00c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0xfa00c0 'a') -> $$ = nterm item (0xbef25d58 'a') 0xfa00c0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xbef25d58 } 0xfa00c0->Object::Object { 0xfa00b0, 0xbef25d58 } 0xbef25d58->Object::~Object { 0xfa00b0, 0xfa00c0, 0xbef25d58 } Entering state 11 Stack now 0 11 11 Reading a token 0xbef25cc4->Object::Object { 0xfa00b0, 0xfa00c0 } 0xbef25d48->Object::Object { 0xfa00b0, 0xfa00c0, 0xbef25cc4 } 0xbef25cc4->Object::~Object { 0xfa00b0, 0xfa00c0, 0xbef25cc4, 0xbef25d48 } Next 
token is token 'a' (0xbef25d48 'a') 0xbef25cc0->Object::Object { 0xfa00b0, 0xfa00c0, 0xbef25d48 } 0xbef25d48->Object::~Object { 0xfa00b0, 0xfa00c0, 0xbef25cc0, 0xbef25d48 } Shifting token 'a' (0xbef25cc0 'a') 0xfa00d0->Object::Object { 0xfa00b0, 0xfa00c0, 0xbef25cc0 } 0xbef25cc0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25cc0 } Entering state 2 Stack now 0 11 11 2 0xbef25d58->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0xfa00d0 'a') -> $$ = nterm item (0xbef25d58 'a') 0xfa00d0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25d58 } 0xfa00d0->Object::Object { 0xfa00b0, 0xfa00c0, 0xbef25d58 } 0xbef25d58->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25d58 } Entering state 11 Stack now 0 11 11 11 Reading a token 0xbef25cc4->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0 } 0xbef25d48->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25cc4 } 0xbef25cc4->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25cc4, 0xbef25d48 } Next token is token 'a' (0xbef25d48 'a') 0xbef25cc0->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25d48 } 0xbef25d48->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25cc0, 0xbef25d48 } Shifting token 'a' (0xbef25cc0 'a') 0xfa00e0->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25cc0 } 0xbef25cc0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0, 0xbef25cc0 } Entering state 2 Stack now 0 11 11 11 2 0xbef25d58->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0xfa00e0 'a') -> $$ = nterm item (0xbef25d58 'a') 0xfa00e0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0, 0xbef25d58 } 0xfa00e0->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25d58 } 0xbef25d58->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0, 0xbef25d58 } Entering state 11 Stack now 0 11 11 11 11 Reading a token 0xbef25cc4->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0 } 
0xbef25d48->Object::Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0, 0xbef25cc4 } 0xbef25cc4->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0, 0xbef25cc4, 0xbef25d48 } Next token is token 'p' (0xbef25d48 'p'Exception caught: cleaning lookahead and stack 0xfa00e0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xfa00e0, 0xbef25d48 } 0xfa00d0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xfa00d0, 0xbef25d48 } 0xfa00c0->Object::~Object { 0xfa00b0, 0xfa00c0, 0xbef25d48 } 0xfa00b0->Object::~Object { 0xfa00b0, 0xbef25d48 } 0xbef25d48->Object::~Object { 0xbef25d48 } exception caught: printer end { } ./c++.at:1362: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1362: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaT stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1362: $PREPARSER ./input aaaaR stderr: ./c++.at:1362: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 687. c++.at:1362: ok 690. c++.at:1422: testing Shared locations ... ./c++.at:1456: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -o x1.cc x1.yy ./c++.at:1456: $CXX $CPPFLAGS $CXXFLAGS -Iinclude -c -o x1.o x1.cc stderr: In file included from /usr/include/c++/12/vector:70, from input.cc:148: /usr/include/c++/12/bits/vector.tcc: In member function 'constexpr void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'constexpr std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'constexpr void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'constexpr std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'constexpr void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at input.cc:1094:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at input.cc:1707:19, inlined from 'virtual int yy::parser::parse()' at input.cc:2033:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1363: $PREPARSER ./input aaaas stderr: exception caught: reduction ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaal stderr: exception caught: yylex ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input i stderr: exception caught: initial-action ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaap stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input --debug aaaap stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbed22cc4->Object::Object { } 0xbed22d48->Object::Object { 0xbed22cc4 } 0xbed22cc4->Object::~Object { 0xbed22cc4, 0xbed22d48 } Next token is token 'a' (0xbed22d48 'a') 0xbed22cc0->Object::Object { 0xbed22d48 } 0xbed22d48->Object::~Object { 0xbed22cc0, 0xbed22d48 } Shifting token 'a' (0xbed22cc0 'a') 0x52f0b0->Object::Object { 0xbed22cc0 } 0xbed22cc0->Object::~Object { 0x52f0b0, 0xbed22cc0 } Entering state 
1 Stack now 0 1 0xbed22d58->Object::Object { 0x52f0b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x52f0b0 'a') -> $$ = nterm item (0xbed22d58 'a') 0x52f0b0->Object::~Object { 0x52f0b0, 0xbed22d58 } 0x52f0b0->Object::Object { 0xbed22d58 } 0xbed22d58->Object::~Object { 0x52f0b0, 0xbed22d58 } Entering state 10 Stack now 0 10 Reading a token 0xbed22cc4->Object::Object { 0x52f0b0 } 0xbed22d48->Object::Object { 0x52f0b0, 0xbed22cc4 } 0xbed22cc4->Object::~Object { 0x52f0b0, 0xbed22cc4, 0xbed22d48 } Next token is token 'a' (0xbed22d48 'a') 0xbed22cc0->Object::Object { 0x52f0b0, 0xbed22d48 } 0xbed22d48->Object::~Object { 0x52f0b0, 0xbed22cc0, 0xbed22d48 } Shifting token 'a' (0xbed22cc0 'a') 0x52f0c0->Object::Object { 0x52f0b0, 0xbed22cc0 } 0xbed22cc0->Object::~Object { 0x52f0b0, 0x52f0c0, 0xbed22cc0 } Entering state 1 Stack now 0 10 1 0xbed22d58->Object::Object { 0x52f0b0, 0x52f0c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x52f0c0 'a') -> $$ = nterm item (0xbed22d58 'a') 0x52f0c0->Object::~Object { 0x52f0b0, 0x52f0c0, 0xbed22d58 } 0x52f0c0->Object::Object { 0x52f0b0, 0xbed22d58 } 0xbed22d58->Object::~Object { 0x52f0b0, 0x52f0c0, 0xbed22d58 } Entering state 10 Stack now 0 10 10 Reading a token 0xbed22cc4->Object::Object { 0x52f0b0, 0x52f0c0 } 0xbed22d48->Object::Object { 0x52f0b0, 0x52f0c0, 0xbed22cc4 } 0xbed22cc4->Object::~Object { 0x52f0b0, 0x52f0c0, 0xbed22cc4, 0xbed22d48 } Next token is token 'a' (0xbed22d48 'a') 0xbed22cc0->Object::Object { 0x52f0b0, 0x52f0c0, 0xbed22d48 } 0xbed22d48->Object::~Object { 0x52f0b0, 0x52f0c0, 0xbed22cc0, 0xbed22d48 } Shifting token 'a' (0xbed22cc0 'a') 0x52f0d0->Object::Object { 0x52f0b0, 0x52f0c0, 0xbed22cc0 } 0xbed22cc0->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22cc0 } Entering state 1 Stack now 0 10 10 1 0xbed22d58->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x52f0d0 'a') -> $$ = nterm item (0xbed22d58 'a') 0x52f0d0->Object::~Object { 
0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22d58 } 0x52f0d0->Object::Object { 0x52f0b0, 0x52f0c0, 0xbed22d58 } 0xbed22d58->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22d58 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbed22cc4->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0 } 0xbed22d48->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22cc4 } 0xbed22cc4->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22cc4, 0xbed22d48 } Next token is token 'a' (0xbed22d48 'a') 0xbed22cc0->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22d48 } 0xbed22d48->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22cc0, 0xbed22d48 } Shifting token 'a' (0xbed22cc0 'a') 0x52f0e0->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22cc0 } 0xbed22cc0->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0, 0xbed22cc0 } Entering state 1 Stack now 0 10 10 10 1 0xbed22d58->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x52f0e0 'a') -> $$ = nterm item (0xbed22d58 'a') 0x52f0e0->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0, 0xbed22d58 } 0x52f0e0->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22d58 } 0xbed22d58->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0, 0xbed22d58 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbed22cc4->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0 } 0xbed22d48->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0, 0xbed22cc4 } 0xbed22cc4->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0, 0xbed22cc4, 0xbed22d48 } Next token is token 'p' (0xbed22d48 'p'Exception caught: cleaning lookahead and stack 0x52f0e0->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0, 0xbed22d48 } 0x52f0d0->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22d48 } 0x52f0c0->Object::~Object { 0x52f0b0, 0x52f0c0, 0xbed22d48 } 0x52f0b0->Object::~Object { 0x52f0b0, 0xbed22d48 } 0xbed22d48->Object::~Object { 0xbed22d48 } exception caught: 
printer end { } ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Stack now 0 Reading a token 0xbed22cc4->Object::Object { } 0xbed22d48->Object::Object { 0xbed22cc4 } 0xbed22cc4->Object::~Object { 0xbed22cc4, 0xbed22d48 } Next token is token 'a' (0xbed22d48 'a') 0xbed22cc0->Object::Object { 0xbed22d48 } 0xbed22d48->Object::~Object { 0xbed22cc0, 0xbed22d48 } Shifting token 'a' (0xbed22cc0 'a') 0x52f0b0->Object::Object { 0xbed22cc0 } 0xbed22cc0->Object::~Object { 0x52f0b0, 0xbed22cc0 } Entering state 1 Stack now 0 1 0xbed22d58->Object::Object { 0x52f0b0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x52f0b0 'a') -> $$ = nterm item (0xbed22d58 'a') 0x52f0b0->Object::~Object { 0x52f0b0, 0xbed22d58 } 0x52f0b0->Object::Object { 0xbed22d58 } 0xbed22d58->Object::~Object { 0x52f0b0, 0xbed22d58 } Entering state 10 Stack now 0 10 Reading a token 0xbed22cc4->Object::Object { 0x52f0b0 } 0xbed22d48->Object::Object { 0x52f0b0, 0xbed22cc4 } 0xbed22cc4->Object::~Object { 0x52f0b0, 0xbed22cc4, 0xbed22d48 } Next token is token 'a' (0xbed22d48 'a') 0xbed22cc0->Object::Object { 0x52f0b0, 0xbed22d48 } 0xbed22d48->Object::~Object { 0x52f0b0, 0xbed22cc0, 0xbed22d48 } Shifting token 'a' (0xbed22cc0 'a') 0x52f0c0->Object::Object { 0x52f0b0, 0xbed22cc0 } 0xbed22cc0->Object::~Object { 0x52f0b0, 0x52f0c0, 0xbed22cc0 } Entering state 1 Stack now 0 10 1 0xbed22d58->Object::Object { 0x52f0b0, 0x52f0c0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x52f0c0 'a') -> $$ = nterm item (0xbed22d58 'a') 0x52f0c0->Object::~Object { 0x52f0b0, 0x52f0c0, 0xbed22d58 } 0x52f0c0->Object::Object { 0x52f0b0, 0xbed22d58 } 0xbed22d58->Object::~Object { 0x52f0b0, 0x52f0c0, 0xbed22d58 } Entering state 10 Stack now 0 10 10 Reading a token 0xbed22cc4->Object::Object { 0x52f0b0, 0x52f0c0 } 0xbed22d48->Object::Object { 0x52f0b0, 0x52f0c0, 0xbed22cc4 } 0xbed22cc4->Object::~Object { 0x52f0b0, 0x52f0c0, 0xbed22cc4, 0xbed22d48 } Next 
token is token 'a' (0xbed22d48 'a') 0xbed22cc0->Object::Object { 0x52f0b0, 0x52f0c0, 0xbed22d48 } 0xbed22d48->Object::~Object { 0x52f0b0, 0x52f0c0, 0xbed22cc0, 0xbed22d48 } Shifting token 'a' (0xbed22cc0 'a') 0x52f0d0->Object::Object { 0x52f0b0, 0x52f0c0, 0xbed22cc0 } 0xbed22cc0->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22cc0 } Entering state 1 Stack now 0 10 10 1 0xbed22d58->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x52f0d0 'a') -> $$ = nterm item (0xbed22d58 'a') 0x52f0d0->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22d58 } 0x52f0d0->Object::Object { 0x52f0b0, 0x52f0c0, 0xbed22d58 } 0xbed22d58->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22d58 } Entering state 10 Stack now 0 10 10 10 Reading a token 0xbed22cc4->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0 } 0xbed22d48->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22cc4 } 0xbed22cc4->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22cc4, 0xbed22d48 } Next token is token 'a' (0xbed22d48 'a') 0xbed22cc0->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22d48 } 0xbed22d48->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22cc0, 0xbed22d48 } Shifting token 'a' (0xbed22cc0 'a') 0x52f0e0->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22cc0 } 0xbed22cc0->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0, 0xbed22cc0 } Entering state 1 Stack now 0 10 10 10 1 0xbed22d58->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0 } Reducing stack by rule 4 (line 142): $1 = token 'a' (0x52f0e0 'a') -> $$ = nterm item (0xbed22d58 'a') 0x52f0e0->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0, 0xbed22d58 } 0x52f0e0->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22d58 } 0xbed22d58->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0, 0xbed22d58 } Entering state 10 Stack now 0 10 10 10 10 Reading a token 0xbed22cc4->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0 } 
0xbed22d48->Object::Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0, 0xbed22cc4 } 0xbed22cc4->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0, 0xbed22cc4, 0xbed22d48 } Next token is token 'p' (0xbed22d48 'p'Exception caught: cleaning lookahead and stack 0x52f0e0->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0x52f0e0, 0xbed22d48 } 0x52f0d0->Object::~Object { 0x52f0b0, 0x52f0c0, 0x52f0d0, 0xbed22d48 } 0x52f0c0->Object::~Object { 0x52f0b0, 0x52f0c0, 0xbed22d48 } 0x52f0b0->Object::~Object { 0x52f0b0, 0xbed22d48 } 0xbed22d48->Object::~Object { 0xbed22d48 } exception caught: printer end { } ./c++.at:1363: grep '^exception caught: printer$' stderr stdout: exception caught: printer ./c++.at:1363: $PREPARSER ./input aaaae stderr: exception caught: syntax error ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaE stderr: exception caught: syntax error, unexpected end of file, expecting 'a' ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaT stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./c++.at:1363: $PREPARSER ./input aaaaR stderr: ./c++.at:1363: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 688. c++.at:1363: ok 691. c++.at:1517: testing Default action ... 
======== Testing with C++ standard flags: ''
./c++.at:1555: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./c++.at:1555: ./check
./c++.at:1555: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -o test.cc test.y
stderr:
stdout:
======== Testing with C++ standard flags: ''
./c++.at:1411: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
./c++.at:1555: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./c++.at:1463: sed -ne '/INCLUDED/p;/\\file/{p;n;p;}' include/ast/loc.hh
./c++.at:1471: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -o x2.cc x2.yy
./c++.at:1471: $CXX $CPPFLAGS $CXXFLAGS -Iinclude -c -o x2.o x2.cc
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:63:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1
  439 |     vector<_Tp, _Alloc>::
      |     ^~~~~~~~~~~~~~~~~~~
In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:1011:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1555:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:1011:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1555:19, inlined from 'void yy::parser::yypush_(const char*, state_type, symbol_type&&)' at test.cc:1562:13: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:1011:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1555:19, inlined from 'virtual int yy::parser::parse()' at test.cc:1851:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:1011:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1555:19, inlined from 'virtual int yy::parser::parse()' at test.cc:1940:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1555: $PREPARSER ./test stderr: ./c++.at:1555: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' 
./c++.at:1555: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
689. c++.at:1371: ok
stderr:
stdout:
./c++.at:1555: ./check
-std=c++98 not supported
======== Testing with C++ standard flags: ''
./c++.at:1555: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
692. java.at:25: testing Java invalid directives ...
./java.at:35: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret YYParser.y
stderr:
stdout:
./c++.at:1501: $CXX $CPPFLAGS $CXXFLAGS -Iinclude $LDFLAGS -o parser x[12].o main.cc $LIBS
./java.at:50: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -fcaret YYParser.y
stderr:
stdout:
./c++.at:1555: ./check
-std=c++03 not supported
======== Testing with C++ standard flags: ''
./c++.at:1555: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
692. java.at:25: ok
693. java.at:186: testing Java parser class and package names ...
./java.at:188: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated YYParser.y
stderr:
stdout:
./c++.at:1555: ./check
-std=c++11 not supported
======== Testing with C++ standard flags: ''
./c++.at:1555: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
./java.at:188: grep '[mb]4_' YYParser.y
stdout:
693. java.at:186: skipped (java.at:188)
stderr:
stdout:
./c++.at:1555: ./check
./c++.at:1555: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -o test.cc test.y
694. java.at:217: testing Java parser class modifiers ...
./java.at:219: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated YYParser.y
./java.at:219: grep '[mb]4_' YYParser.y
stdout:
694. java.at:217:
./c++.at:1555: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
 skipped (java.at:219)
695. java.at:287: testing Java parser class extends and implements ...
./java.at:289: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated YYParser.y
./java.at:289: grep '[mb]4_' YYParser.y
stdout:
695. java.at:287: skipped (java.at:289)
696. java.at:307: testing Java %parse-param and %lex-param ...
./java.at:309: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated YYParser.y
./java.at:309: grep '[mb]4_' YYParser.y
stdout:
696. java.at:307: skipped (java.at:309)
697. java.at:381: testing Java throws specifications ...
./java.at:441: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated YYParser.y
./java.at:441: grep '[mb]4_' YYParser.y
stdout:
697. java.at:381: skipped (java.at:441)
698. java.at:470: testing Java constructor init and init_throws ...
./java.at:475: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated YYParser.y
./java.at:475: grep '[mb]4_' YYParser.y
stdout:
698. java.at:470: skipped (java.at:475)
stderr:
stdout:
./c++.at:1502: $PREPARSER ./parser
stderr:
./c++.at:1502: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
690. c++.at:1422: ok
700. java.at:528: testing Java syntax error handling without error token ...
./java.at:579: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret YYParser.y
699. java.at:497: testing Java value, position, and location types ...
./java.at:499: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated YYParser.y
700. java.at:528:
./java.at:499: grep '[mb]4_' YYParser.y
 skipped (java.at:580)
stdout:
699. java.at:497: skipped (java.at:499)
701. javapush.at:172: testing Trivial Push Parser with api.push-pull verification ...
./javapush.at:181: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dapi.push-pull=pull -o Main.java input.y
702. javapush.at:217: testing Trivial Push Parser with %initial-action ...
./javapush.at:227: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dapi.push-pull=push -o Main.java input.y
./javapush.at:182: grep -c '^.*public boolean parse().*$' Main.java
./javapush.at:187: grep -c '^.*public int push_parse(int yylextoken, Object yylexval).*$' Main.java
./javapush.at:191: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dapi.push-pull=both -o Main.java input.y
./javapush.at:228: grep -c '^System.err.println("Initial action invoked");$' Main.java
702. javapush.at:217: skipped (javapush.at:230)
./javapush.at:192: grep -c '^.*public boolean parse().*$' Main.java
./javapush.at:195: grep -c '^.*public int push_parse(int yylextoken, Object yylexval).*$' Main.java
./javapush.at:199: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Dapi.push-pull=push -o Main.java input.y
./javapush.at:200: grep -c '^.*public boolean parse().*$' Main.java
./javapush.at:203: grep -c '^.*public int push_parse(int yylextoken, Object yylexval).*$' Main.java
703. d.at:103: testing D parser class extends and implements ...
./d.at:106: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated YYParser.y
701. javapush.at:172: skipped (javapush.at:207)
./d.at:106: grep '[mb]4_' YYParser.y
stdout:
703. d.at:103: skipped (d.at:106)
704. d.at:138: testing D parser class api.token.raw true by default ...
./d.at:141: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -Wno-deprecated YYParser.y
./d.at:141: grep '[mb]4_' YYParser.y
stdout:
704. d.at:138: skipped (d.at:141)
705. cxx-type.at:409: testing GLR: Resolve ambiguity, impure, no locations ...
./cxx-type.at:410: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o types.c types.y
stderr:
types.y:77.8-37: warning: unset value: $$ [-Wother]
types.y: warning: 1 reduce/reduce conflict [-Wconflicts-rr]
types.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
./cxx-type.at:410: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o types types.c $LIBS
706. cxx-type.at:415: testing GLR: Resolve ambiguity, impure, locations ...
./cxx-type.at:416: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o types.c types.y
stderr:
types.y:87.8-37: warning: unset value: $$ [-Wother]
types.y: warning: 1 reduce/reduce conflict [-Wconflicts-rr]
types.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
./cxx-type.at:416: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o types types.c $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from test.cc:63:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...)
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:1011:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1555:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:1011:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1555:19, inlined from 'void yy::parser::yypush_(const char*, state_type, symbol_type&&)' at test.cc:1562:13: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:1011:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1555:19, inlined from 'virtual int yy::parser::parse()' at test.cc:1851:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector >]' at test.cc:1011:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1555:19, inlined from 'virtual int yy::parser::parse()' at test.cc:1940:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1555: $PREPARSER ./test stderr: ./c++.at:1555: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1555: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./c++.at:1555: ./check ./c++.at:1555: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -o test.cc test.y ./c++.at:1555: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS stderr: stdout: ./cxx-type.at:412: $PREPARSER ./types test-input stderr: syntax error ./cxx-type.at:412: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./cxx-type.at:412: $PREPARSER ./types -p test-input stderr: Starting parse Entering state 0 Reducing stack 0 by rule 1 (line 64): -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () 
Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token ';' () Shifting token ';' () Entering state 23 Reducing stack 0 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token '=' () Shifting token '=' () Entering state 22 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 29 Reading a token Next token is token ';' () Shifting token ';' () Entering state 30 Reducing stack 0 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = 
token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 14 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 24 Reading a token Next token is token ';' () Reducing stack 0 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. 
Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' () Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' () Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Returning to deterministic operation. Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. 
Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 87); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
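Editor's note: the "Merging stack 1 into stack 0" step just above is where GLR reconciles two split parses that have converged on the same parser state. Bison lets a grammar steer such merges with its real `%dprec` (static preference) and `%merge` (user function) directives; the toy Python sketch below (names and values hypothetical, not bison's implementation) illustrates only the static-preference idea:

```python
# Hedged sketch of GLR stack merging: when two stacks reach the same state
# at the same input position, the parser keeps one semantic value.  In bison
# this choice can be made by %dprec or a %merge function; here a toy numeric
# preference stands in for %dprec.  All values below are illustrative.
candidates = [
    {"value": "stmt(expr: T(x);)", "dprec": 1},   # expression reading
    {"value": "stmt(decl: T(x);)", "dprec": 2},   # declaration reading
]

def merge(cands):
    """Keep the candidate with the highest %dprec-style preference."""
    return max(cands, key=lambda c: c["dprec"])["value"]

print(merge(candidates))   # stmt(decl: T(x);)
```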
On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. 
Stack 1 Entering state 13 Next token is token '=' () On stack 0, shifting token '=' () Stack 0 now in state 14 On stack 1, shifting token '=' () Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID () Stack 1 Entering state 22 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' () On stack 0, shifting token '+' () Stack 0 now in state 15 On stack 1, shifting token '+' () Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID () Stack 1 Entering state 15 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 83); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' () Reduced stack 0 by rule 10 (line 84); action deferred. Now in state 8. Stack 0 Entering state 8 Next token is token ';' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 83); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. 
Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 89); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. 
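Editor's note: the "rule N" numbers in these traces can be matched against the reductions they print, which lets the test grammar be reconstructed approximately. The sketch below is inferred from this log, not copied from the test's types.y; precedence and `%merge`/`%dprec` declarations are omitted and details may differ.

```yacc
/* Approximate grammar inferred from the "Reducing ... by rule N" lines. */
%glr-parser
%token TYPENAME ID
%%
prog : %empty                              /* rule 1 */
     | prog stmt ;                         /* rule 2 */
stmt : expr ';'                            /* rule 3 */
     | decl                                /* rule 4 */
     | error ';'                           /* rule 5 */
     | '@' ;                               /* rule 6: ends the test input */
expr : ID                                  /* rule 7 */
     | TYPENAME '(' expr ')'               /* rule 8 */
     | expr '+' expr                       /* rule 9 */
     | expr '=' expr ;                     /* rule 10 */
decl : TYPENAME declarator ';'             /* rule 11 */
     | TYPENAME declarator '=' expr ';' ;  /* rule 12 */
declarator : ID                            /* rule 13 */
           | '(' declarator ')' ;          /* rule 14 */
```

Rules 8 and 14 are the source of the stack splits seen throughout: after `TYPENAME '(' ID ')'`, the input may still be either a cast-like expression or a parenthesized declarator.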
Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ID () Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 20 Next token is token ID () syntax error Error: popping nterm expr () Error: popping token '(' () Error: popping token TYPENAME () Shifting token error () Entering state 3 Next token is token ID () Error: discarding token ID () Reading a token Next token is token ')' () Error: discarding token ')' () Reading a token Next token is token '=' () Error: discarding token '=' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token ';' () Entering state 3 Next token is token ';' () Shifting token ';' () Entering state 10 Reducing stack 0 by rule 5 (line 76): $1 = token error () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = 
nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token '@' () Shifting token '@' () Entering state 6 Reducing stack 0 by rule 6 (line 77): $1 = token '@' () Cleanup: popping nterm prog ()
./cxx-type.at:412: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reducing stack 0 by rule 1 (line 64): -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token ';' () Shifting token ';' () Entering state 23 Reducing stack 0 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7
Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token '=' () Shifting token '=' () Entering state 22 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 29 Reading a token Next token is token ';' () Shifting token ';' () Entering state 30 Reducing stack 0 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 14 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 24 Reading a token Next token is token ';' () Reducing stack 0 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = 
nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' () Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' () Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Returning to deterministic operation. 
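Editor's note: "Stack 1 dies. Removing dead stacks." above is GLR pruning: a split stack is discarded as soon as it cannot shift the lookahead token. A minimal Python sketch of that idea (the token sets here are illustrative toys, not the parser's actual tables):

```python
# After "TYPENAME ( ID )" two interpretations coexist; the next token decides
# which survive.  These follow sets are hypothetical, chosen to mirror the
# trace: a declarator cannot be followed by '+', so that stack dies on '+'.
FOLLOW = {
    "expr": {"+", "=", ";"},
    "declarator": {"=", ";"},
}

def surviving_stacks(next_token):
    """Return the interpretations that can shift next_token."""
    return sorted(k for k, toks in FOLLOW.items() if next_token in toks)

print(surviving_stacks("+"))   # ['expr'] -- the declarator stack dies
print(surviving_stacks(";"))   # ['declarator', 'expr'] -- both survive; merge
```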
Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. 
Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 87); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. 
Stack 0 Entering state 8 Reading a token Next token is token '=' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '=' () On stack 0, shifting token '=' () Stack 0 now in state 14 On stack 1, shifting token '=' () Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID () Stack 1 Entering state 22 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' () On stack 0, shifting token '+' () Stack 0 now in state 15 On stack 1, shifting token '+' () Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID () Stack 1 Entering state 15 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 83); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' () Reduced stack 0 by rule 10 (line 84); action deferred. Now in state 8. Stack 0 Entering state 8 Next token is token ';' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 83); action deferred. Now in state 29. 
Stack 1 Entering state 29 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 89); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. 
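Editor's note: the "action deferred" and later "Reducing stack -1 by rule N" pairs in this trace reflect that GLR postpones semantic actions while stacks are split and replays them in order once a single stack remains ("Returning to deterministic operation"). A hedged Python sketch of that record-then-replay pattern (not bison's data structures):

```python
# While stacks are split, reductions are recorded as pending thunks instead
# of being executed; once one stack survives, they are replayed in order.
deferred = []

def reduce_deferred(rule, run):
    deferred.append((rule, run))        # record, don't execute yet

def return_to_deterministic():
    results = [run() for _, run in deferred]   # replay in recorded order
    deferred.clear()
    return results

reduce_deferred(13, lambda: "declarator <- ID")
reduce_deferred(14, lambda: "declarator <- ( declarator )")
print(return_to_deterministic())
# ['declarator <- ID', 'declarator <- ( declarator )']
```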
Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ID () Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 20 Next token is token ID () syntax error Error: popping nterm expr () Error: popping token '(' () Error: popping token TYPENAME () Shifting token error () Entering state 3 Next token is token ID () Error: discarding token ID () Reading a token Next token is token ')' () Error: discarding token ')' () Reading a token Next token is token '=' () Error: discarding token '=' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token ';' () Entering state 3 Next token is token ';' () Shifting token ';' () Entering state 10 Reducing stack 0 by rule 5 (line 76): $1 = token error () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = 
nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token '@' () Shifting token '@' () Entering state 6 Reducing stack 0 by rule 6 (line 77): $1 = token '@' () Cleanup: popping nterm prog ()
705. cxx-type.at:409: ok
707. cxx-type.at:420: testing GLR: Resolve ambiguity, pure, no locations ...
./cxx-type.at:421: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o types.c types.y
stderr:
stdout:
./cxx-type.at:417: $PREPARSER ./types test-input
stderr:
17.5: syntax error
./cxx-type.at:417: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./cxx-type.at:417: $PREPARSER ./types -p test-input
stderr:
Starting parse Entering state 0 Reducing stack 0 by rule 1 (line 71): -> $$ = nterm prog (1.1: ) Entering state 1 Reading a token Next token is token ID (3.0: ) Shifting token ID (3.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (3.0: ) -> $$ = nterm expr (3.0: ) Entering state 8 Reading a token Next token is token '+' (3.2: ) Shifting token '+' (3.2: ) Entering state 15 Reading a token Next token is token ID (3.4: ) Shifting token ID (3.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (3.4: ) -> $$ = nterm expr (3.4: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (3.0: ) $2 = token '+' (3.2: ) $3 = nterm expr (3.4: ) -> $$ = nterm expr (3.0-4: ) Entering state 8 Reading a token Next token is token ';' (3.5: ) Shifting token ';' (3.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (3.0-4: ) $2 = token ';' (3.5: ) -> $$ = nterm stmt (3.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1: ) $2 = nterm stmt (3.0-5: ) -> $$ = nterm prog (1.1-3.5: ) Entering state 1 Reading a token Next token is token TYPENAME (5.0: ) Shifting token TYPENAME (5.0: )
Entering state 4 Reading a token Next token is token ID (5.2: ) Shifting token ID (5.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (5.2: ) -> $$ = nterm declarator (5.2: ) Entering state 13 Reading a token Next token is token ';' (5.3: ) Shifting token ';' (5.3: ) Entering state 23 Reducing stack 0 by rule 11 (line 97): $1 = token TYPENAME (5.0: ) $2 = nterm declarator (5.2: ) $3 = token ';' (5.3: ) -> $$ = nterm decl (5.0-3: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (5.0-3: ) -> $$ = nterm stmt (5.0-3: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-3.5: ) $2 = nterm stmt (5.0-3: ) -> $$ = nterm prog (1.1-5.3: ) Entering state 1 Reading a token Next token is token TYPENAME (7.0: ) Shifting token TYPENAME (7.0: ) Entering state 4 Reading a token Next token is token ID (7.2: ) Shifting token ID (7.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (7.2: ) -> $$ = nterm declarator (7.2: ) Entering state 13 Reading a token Next token is token '=' (7.4: ) Shifting token '=' (7.4: ) Entering state 22 Reading a token Next token is token ID (7.6: ) Shifting token ID (7.6: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (7.6: ) -> $$ = nterm expr (7.6: ) Entering state 29 Reading a token Next token is token ';' (7.7: ) Shifting token ';' (7.7: ) Entering state 30 Reducing stack 0 by rule 12 (line 99): $1 = token TYPENAME (7.0: ) $2 = nterm declarator (7.2: ) $3 = token '=' (7.4: ) $4 = nterm expr (7.6: ) $5 = token ';' (7.7: ) -> $$ = nterm decl (7.0-7: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (7.0-7: ) -> $$ = nterm stmt (7.0-7: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-5.3: ) $2 = nterm stmt (7.0-7: ) -> $$ = nterm prog (1.1-7.7: ) Entering state 1 Reading a token Next token is token ID (9.0: ) Shifting token ID (9.0: ) Entering state 5 Reducing stack 0 by rule 7 
(line 90): $1 = token ID (9.0: ) -> $$ = nterm expr (9.0: ) Entering state 8 Reading a token Next token is token '=' (9.2: ) Shifting token '=' (9.2: ) Entering state 14 Reading a token Next token is token ID (9.4: ) Shifting token ID (9.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.4: ) -> $$ = nterm expr (9.4: ) Entering state 24 Reading a token Next token is token ';' (9.5: ) Reducing stack 0 by rule 10 (line 94): $1 = nterm expr (9.0: ) $2 = token '=' (9.2: ) $3 = nterm expr (9.4: ) -> $$ = nterm expr (9.0-4: ) Entering state 8 Next token is token ';' (9.5: ) Shifting token ';' (9.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (9.0-4: ) $2 = token ';' (9.5: ) -> $$ = nterm stmt (9.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-7.7: ) $2 = nterm stmt (9.0-5: ) -> $$ = nterm prog (1.1-9.5: ) Entering state 1 Reading a token Next token is token TYPENAME (11.0: ) Shifting token TYPENAME (11.0: ) Entering state 4 Reading a token Next token is token '(' (11.2: ) Shifting token '(' (11.2: ) Entering state 12 Reading a token Next token is token ID (11.3: ) Shifting token ID (11.3: ) Entering state 18 Reading a token Next token is token ')' (11.4: ) Stack 0 Entering state 18 Next token is token ')' (11.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (11.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (11.4: ) Stack 1 Entering state 21 Next token is token ')' (11.4: ) On stack 0, shifting token ')' (11.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (11.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. 
Stack 0 Entering state 8 Reading a token Next token is token '+' (11.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' (11.6: ) Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' (11.6: ) Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 90): $1 = token ID (11.3: ) -> $$ = nterm expr (11.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (11.0: ) $2 = token '(' (11.2: ) $3 = nterm expr (11.3: ) $4 = token ')' (11.4: ) -> $$ = nterm expr (11.0-4: ) Returning to deterministic operation. Entering state 15 Reading a token Next token is token ID (11.8: ) Shifting token ID (11.8: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (11.8: ) -> $$ = nterm expr (11.8: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (11.0-4: ) $2 = token '+' (11.6: ) $3 = nterm expr (11.8: ) -> $$ = nterm expr (11.0-8: ) Entering state 8 Reading a token Next token is token ';' (11.9: ) Shifting token ';' (11.9: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (11.0-8: ) $2 = token ';' (11.9: ) -> $$ = nterm stmt (11.0-9: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-9.5: ) $2 = nterm stmt (11.0-9: ) -> $$ = nterm prog (1.1-11.9: ) Entering state 1 Reading a token Next token is token TYPENAME (13.0: ) Shifting token TYPENAME (13.0: ) Entering state 4 Reading a token Next token is token '(' (13.2: ) Shifting token '(' (13.2: ) Entering state 12 Reading a token Next token is token ID (13.3: ) Shifting token ID (13.3: ) Entering state 18 Reading a token Next token is token ')' (13.4: ) Stack 0 Entering state 18 Next token is token ')' (13.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (13.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. 
Now in state 20. Stack 0 Entering state 20 Next token is token ')' (13.4: ) Stack 1 Entering state 21 Next token is token ')' (13.4: ) On stack 0, shifting token ')' (13.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (13.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' (13.5: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' (13.5: ) On stack 0, shifting token ';' (13.5: ) Stack 0 now in state 16 On stack 1, shifting token ';' (13.5: ) Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (15.0: ) Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 97); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
On stack 0, shifting token TYPENAME (15.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (13.3: ) -> $$ = nterm declarator (13.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (13.2: ) $2 = nterm declarator (13.3: ) $3 = token ')' (13.4: ) -> $$ = nterm declarator (13.2-4: ) Reducing stack -1 by rule 11 (line 97): $1 = token TYPENAME (13.0: ) $2 = nterm declarator (13.2-4: ) $3 = token ';' (13.5: ) -> $$ = nterm decl (13.0-5: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (13.0-5: ) -> $$ = nterm stmt (13.0-5: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-11.9: ) $2 = nterm stmt (13.0-5: ) -> $$ = nterm prog (1.1-13.5: ) Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' (15.2: ) Shifting token '(' (15.2: ) Entering state 12 Reading a token Next token is token ID (15.3: ) Shifting token ID (15.3: ) Entering state 18 Reading a token Next token is token ')' (15.4: ) Stack 0 Entering state 18 Next token is token ')' (15.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (15.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (15.4: ) Stack 1 Entering state 21 Next token is token ')' (15.4: ) On stack 0, shifting token ')' (15.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (15.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' (15.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. 
Stack 1 Entering state 13 Next token is token '=' (15.6: ) On stack 0, shifting token '=' (15.6: ) Stack 0 now in state 14 On stack 1, shifting token '=' (15.6: ) Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID (15.8: ) Stack 1 Entering state 22 Next token is token ID (15.8: ) On stack 0, shifting token ID (15.8: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.8: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' (15.10: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' (15.10: ) On stack 0, shifting token '+' (15.10: ) Stack 0 now in state 15 On stack 1, shifting token '+' (15.10: ) Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID (15.12: ) Stack 1 Entering state 15 Next token is token ID (15.12: ) On stack 0, shifting token ID (15.12: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.12: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 93); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' (15.13: ) Reduced stack 0 by rule 10 (line 94); action deferred. Now in state 8. Stack 0 Entering state 8 Next token is token ';' (15.13: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 93); action deferred. Now in state 29. 
Stack 1 Entering state 29 Next token is token ';' (15.13: ) On stack 0, shifting token ';' (15.13: ) Stack 0 now in state 16 On stack 1, shifting token ';' (15.13: ) Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (17.0: ) Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 99); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME (17.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (15.3: ) -> $$ = nterm declarator (15.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (15.2: ) $2 = nterm declarator (15.3: ) $3 = token ')' (15.4: ) -> $$ = nterm declarator (15.2-4: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.8: ) -> $$ = nterm expr (15.8: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.12: ) -> $$ = nterm expr (15.12: ) Reducing stack -1 by rule 9 (line 93): $1 = nterm expr (15.8: ) $2 = token '+' (15.10: ) $3 = nterm expr (15.12: ) -> $$ = nterm expr (15.8-12: ) Reducing stack -1 by rule 12 (line 99): $1 = token TYPENAME (15.0: ) $2 = nterm declarator (15.2-4: ) $3 = token '=' (15.6: ) $4 = nterm expr (15.8-12: ) $5 = token ';' (15.13: ) -> $$ = nterm decl (15.0-13: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (15.0-13: ) -> $$ = nterm stmt (15.0-13: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-13.5: ) $2 = nterm stmt (15.0-13: ) -> $$ = nterm prog (1.1-15.13: ) Returning to deterministic operation. 
Entering state 4 Reading a token Next token is token '(' (17.2: ) Shifting token '(' (17.2: ) Entering state 12 Reading a token Next token is token ID (17.3: ) Shifting token ID (17.3: ) Entering state 18 Reading a token Next token is token ID (17.5: ) Reducing stack 0 by rule 7 (line 90): $1 = token ID (17.3: ) -> $$ = nterm expr (17.3: ) Entering state 20 Next token is token ID (17.5: ) 17.5: syntax error Error: popping nterm expr (17.3: ) Error: popping token '(' (17.2: ) Error: popping token TYPENAME (17.0: ) Shifting token error (17.0-5: ) Entering state 3 Next token is token ID (17.5: ) Error: discarding token ID (17.5: ) Reading a token Next token is token ')' (17.6: ) Error: discarding token ')' (17.6: ) Reading a token Next token is token '=' (17.8: ) Error: discarding token '=' (17.8: ) Reading a token Next token is token ID (17.10: ) Error: discarding token ID (17.10: ) Reading a token Next token is token '+' (17.12: ) Error: discarding token '+' (17.12: ) Reading a token Next token is token ID (17.14: ) Error: discarding token ID (17.14: ) Reading a token Next token is token ';' (17.15: ) Entering state 3 Next token is token ';' (17.15: ) Shifting token ';' (17.15: ) Entering state 10 Reducing stack 0 by rule 5 (line 86): $1 = token error (17.0-14: ) $2 = token ';' (17.15: ) -> $$ = nterm stmt (17.0-15: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-15.13: ) $2 = nterm stmt (17.0-15: ) -> $$ = nterm prog (1.1-17.15: ) Entering state 1 Reading a token Next token is token ID (19.0: ) Shifting token ID (19.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.0: ) -> $$ = nterm expr (19.0: ) Entering state 8 Reading a token Next token is token '+' (19.2: ) Shifting token '+' (19.2: ) Entering state 15 Reading a token Next token is token ID (19.4: ) Shifting token ID (19.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.4: ) -> $$ = nterm expr (19.4: ) Entering state 25 
Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (19.0: ) $2 = token '+' (19.2: ) $3 = nterm expr (19.4: ) -> $$ = nterm expr (19.0-4: ) Entering state 8 Reading a token Next token is token ';' (19.5: ) Shifting token ';' (19.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (19.0-4: ) $2 = token ';' (19.5: ) -> $$ = nterm stmt (19.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-17.15: ) $2 = nterm stmt (19.0-5: ) -> $$ = nterm prog (1.1-19.5: ) Entering state 1 Reading a token Next token is token '@' (21.0: ) Shifting token '@' (21.0: ) Entering state 6 Reducing stack 0 by rule 6 (line 87): $1 = token '@' (21.0: ) Cleanup: popping nterm prog (1.1-19.5: ) ./cxx-type.at:417: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reducing stack 0 by rule 1 (line 71): -> $$ = nterm prog (1.1: ) Entering state 1 Reading a token Next token is token ID (3.0: ) Shifting token ID (3.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (3.0: ) -> $$ = nterm expr (3.0: ) Entering state 8 Reading a token Next token is token '+' (3.2: ) Shifting token '+' (3.2: ) Entering state 15 Reading a token Next token is token ID (3.4: ) Shifting token ID (3.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (3.4: ) -> $$ = nterm expr (3.4: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (3.0: ) $2 = token '+' (3.2: ) $3 = nterm expr (3.4: ) -> $$ = nterm expr (3.0-4: ) Entering state 8 Reading a token Next token is token ';' (3.5: ) Shifting token ';' (3.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (3.0-4: ) $2 = token ';' (3.5: ) -> $$ = nterm stmt (3.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1: ) $2 = nterm stmt (3.0-5: ) -> $$ = nterm prog (1.1-3.5: ) Entering state 1 Reading a token Next token is token TYPENAME (5.0: ) Shifting 
token TYPENAME (5.0: ) Entering state 4 Reading a token Next token is token ID (5.2: ) Shifting token ID (5.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (5.2: ) -> $$ = nterm declarator (5.2: ) Entering state 13 Reading a token Next token is token ';' (5.3: ) Shifting token ';' (5.3: ) Entering state 23 Reducing stack 0 by rule 11 (line 97): $1 = token TYPENAME (5.0: ) $2 = nterm declarator (5.2: ) $3 = token ';' (5.3: ) -> $$ = nterm decl (5.0-3: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (5.0-3: ) -> $$ = nterm stmt (5.0-3: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-3.5: ) $2 = nterm stmt (5.0-3: ) -> $$ = nterm prog (1.1-5.3: ) Entering state 1 Reading a token Next token is token TYPENAME (7.0: ) Shifting token TYPENAME (7.0: ) Entering state 4 Reading a token Next token is token ID (7.2: ) Shifting token ID (7.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (7.2: ) -> $$ = nterm declarator (7.2: ) Entering state 13 Reading a token Next token is token '=' (7.4: ) Shifting token '=' (7.4: ) Entering state 22 Reading a token Next token is token ID (7.6: ) Shifting token ID (7.6: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (7.6: ) -> $$ = nterm expr (7.6: ) Entering state 29 Reading a token Next token is token ';' (7.7: ) Shifting token ';' (7.7: ) Entering state 30 Reducing stack 0 by rule 12 (line 99): $1 = token TYPENAME (7.0: ) $2 = nterm declarator (7.2: ) $3 = token '=' (7.4: ) $4 = nterm expr (7.6: ) $5 = token ';' (7.7: ) -> $$ = nterm decl (7.0-7: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (7.0-7: ) -> $$ = nterm stmt (7.0-7: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-5.3: ) $2 = nterm stmt (7.0-7: ) -> $$ = nterm prog (1.1-7.7: ) Entering state 1 Reading a token Next token is token ID (9.0: ) Shifting token ID (9.0: ) Entering state 5 
Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.0: ) -> $$ = nterm expr (9.0: ) Entering state 8 Reading a token Next token is token '=' (9.2: ) Shifting token '=' (9.2: ) Entering state 14 Reading a token Next token is token ID (9.4: ) Shifting token ID (9.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.4: ) -> $$ = nterm expr (9.4: ) Entering state 24 Reading a token Next token is token ';' (9.5: ) Reducing stack 0 by rule 10 (line 94): $1 = nterm expr (9.0: ) $2 = token '=' (9.2: ) $3 = nterm expr (9.4: ) -> $$ = nterm expr (9.0-4: ) Entering state 8 Next token is token ';' (9.5: ) Shifting token ';' (9.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (9.0-4: ) $2 = token ';' (9.5: ) -> $$ = nterm stmt (9.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-7.7: ) $2 = nterm stmt (9.0-5: ) -> $$ = nterm prog (1.1-9.5: ) Entering state 1 Reading a token Next token is token TYPENAME (11.0: ) Shifting token TYPENAME (11.0: ) Entering state 4 Reading a token Next token is token '(' (11.2: ) Shifting token '(' (11.2: ) Entering state 12 Reading a token Next token is token ID (11.3: ) Shifting token ID (11.3: ) Entering state 18 Reading a token Next token is token ')' (11.4: ) Stack 0 Entering state 18 Next token is token ')' (11.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (11.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (11.4: ) Stack 1 Entering state 21 Next token is token ')' (11.4: ) On stack 0, shifting token ')' (11.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (11.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. 
Stack 0 Entering state 8 Reading a token Next token is token '+' (11.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' (11.6: ) Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' (11.6: ) Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 90): $1 = token ID (11.3: ) -> $$ = nterm expr (11.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (11.0: ) $2 = token '(' (11.2: ) $3 = nterm expr (11.3: ) $4 = token ')' (11.4: ) -> $$ = nterm expr (11.0-4: ) Returning to deterministic operation. Entering state 15 Reading a token Next token is token ID (11.8: ) Shifting token ID (11.8: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (11.8: ) -> $$ = nterm expr (11.8: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (11.0-4: ) $2 = token '+' (11.6: ) $3 = nterm expr (11.8: ) -> $$ = nterm expr (11.0-8: ) Entering state 8 Reading a token Next token is token ';' (11.9: ) Shifting token ';' (11.9: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (11.0-8: ) $2 = token ';' (11.9: ) -> $$ = nterm stmt (11.0-9: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-9.5: ) $2 = nterm stmt (11.0-9: ) -> $$ = nterm prog (1.1-11.9: ) Entering state 1 Reading a token Next token is token TYPENAME (13.0: ) Shifting token TYPENAME (13.0: ) Entering state 4 Reading a token Next token is token '(' (13.2: ) Shifting token '(' (13.2: ) Entering state 12 Reading a token Next token is token ID (13.3: ) Shifting token ID (13.3: ) Entering state 18 Reading a token Next token is token ')' (13.4: ) Stack 0 Entering state 18 Next token is token ')' (13.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (13.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. 
Now in state 20. Stack 0 Entering state 20 Next token is token ')' (13.4: ) Stack 1 Entering state 21 Next token is token ')' (13.4: ) On stack 0, shifting token ')' (13.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (13.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' (13.5: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' (13.5: ) On stack 0, shifting token ';' (13.5: ) Stack 0 now in state 16 On stack 1, shifting token ';' (13.5: ) Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (15.0: ) Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 97); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
On stack 0, shifting token TYPENAME (15.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (13.3: ) -> $$ = nterm declarator (13.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (13.2: ) $2 = nterm declarator (13.3: ) $3 = token ')' (13.4: ) -> $$ = nterm declarator (13.2-4: ) Reducing stack -1 by rule 11 (line 97): $1 = token TYPENAME (13.0: ) $2 = nterm declarator (13.2-4: ) $3 = token ';' (13.5: ) -> $$ = nterm decl (13.0-5: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (13.0-5: ) -> $$ = nterm stmt (13.0-5: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-11.9: ) $2 = nterm stmt (13.0-5: ) -> $$ = nterm prog (1.1-13.5: ) Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' (15.2: ) Shifting token '(' (15.2: ) Entering state 12 Reading a token Next token is token ID (15.3: ) Shifting token ID (15.3: ) Entering state 18 Reading a token Next token is token ')' (15.4: ) Stack 0 Entering state 18 Next token is token ')' (15.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (15.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (15.4: ) Stack 1 Entering state 21 Next token is token ')' (15.4: ) On stack 0, shifting token ')' (15.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (15.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' (15.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. 
Stack 1 Entering state 13 Next token is token '=' (15.6: ) On stack 0, shifting token '=' (15.6: ) Stack 0 now in state 14 On stack 1, shifting token '=' (15.6: ) Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID (15.8: ) Stack 1 Entering state 22 Next token is token ID (15.8: ) On stack 0, shifting token ID (15.8: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.8: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' (15.10: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' (15.10: ) On stack 0, shifting token '+' (15.10: ) Stack 0 now in state 15 On stack 1, shifting token '+' (15.10: ) Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID (15.12: ) Stack 1 Entering state 15 Next token is token ID (15.12: ) On stack 0, shifting token ID (15.12: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.12: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 93); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' (15.13: ) Reduced stack 0 by rule 10 (line 94); action deferred. Now in state 8. Stack 0 Entering state 8 Next token is token ';' (15.13: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 93); action deferred. Now in state 29. 
Stack 1 Entering state 29 Next token is token ';' (15.13: ) On stack 0, shifting token ';' (15.13: ) Stack 0 now in state 16 On stack 1, shifting token ';' (15.13: ) Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (17.0: ) Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 99); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME (17.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (15.3: ) -> $$ = nterm declarator (15.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (15.2: ) $2 = nterm declarator (15.3: ) $3 = token ')' (15.4: ) -> $$ = nterm declarator (15.2-4: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.8: ) -> $$ = nterm expr (15.8: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.12: ) -> $$ = nterm expr (15.12: ) Reducing stack -1 by rule 9 (line 93): $1 = nterm expr (15.8: ) $2 = token '+' (15.10: ) $3 = nterm expr (15.12: ) -> $$ = nterm expr (15.8-12: ) Reducing stack -1 by rule 12 (line 99): $1 = token TYPENAME (15.0: ) $2 = nterm declarator (15.2-4: ) $3 = token '=' (15.6: ) $4 = nterm expr (15.8-12: ) $5 = token ';' (15.13: ) -> $$ = nterm decl (15.0-13: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (15.0-13: ) -> $$ = nterm stmt (15.0-13: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-13.5: ) $2 = nterm stmt (15.0-13: ) -> $$ = nterm prog (1.1-15.13: ) Returning to deterministic operation. 
Entering state 4 Reading a token Next token is token '(' (17.2: ) Shifting token '(' (17.2: ) Entering state 12 Reading a token Next token is token ID (17.3: ) Shifting token ID (17.3: ) Entering state 18 Reading a token Next token is token ID (17.5: ) Reducing stack 0 by rule 7 (line 90): $1 = token ID (17.3: ) -> $$ = nterm expr (17.3: ) Entering state 20 Next token is token ID (17.5: ) 17.5: syntax error Error: popping nterm expr (17.3: ) Error: popping token '(' (17.2: ) Error: popping token TYPENAME (17.0: ) Shifting token error (17.0-5: ) Entering state 3 Next token is token ID (17.5: ) Error: discarding token ID (17.5: ) Reading a token Next token is token ')' (17.6: ) Error: discarding token ')' (17.6: ) Reading a token Next token is token '=' (17.8: ) Error: discarding token '=' (17.8: ) Reading a token Next token is token ID (17.10: ) Error: discarding token ID (17.10: ) Reading a token Next token is token '+' (17.12: ) Error: discarding token '+' (17.12: ) Reading a token Next token is token ID (17.14: ) Error: discarding token ID (17.14: ) Reading a token Next token is token ';' (17.15: ) Entering state 3 Next token is token ';' (17.15: ) Shifting token ';' (17.15: ) Entering state 10 Reducing stack 0 by rule 5 (line 86): $1 = token error (17.0-14: ) $2 = token ';' (17.15: ) -> $$ = nterm stmt (17.0-15: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-15.13: ) $2 = nterm stmt (17.0-15: ) -> $$ = nterm prog (1.1-17.15: ) Entering state 1 Reading a token Next token is token ID (19.0: ) Shifting token ID (19.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.0: ) -> $$ = nterm expr (19.0: ) Entering state 8 Reading a token Next token is token '+' (19.2: ) Shifting token '+' (19.2: ) Entering state 15 Reading a token Next token is token ID (19.4: ) Shifting token ID (19.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.4: ) -> $$ = nterm expr (19.4: ) Entering state 25 
Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (19.0: ) $2 = token '+' (19.2: ) $3 = nterm expr (19.4: ) -> $$ = nterm expr (19.0-4: ) Entering state 8 Reading a token Next token is token ';' (19.5: ) Shifting token ';' (19.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (19.0-4: ) $2 = token ';' (19.5: ) -> $$ = nterm stmt (19.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-17.15: ) $2 = nterm stmt (19.0-5: ) -> $$ = nterm prog (1.1-19.5: ) Entering state 1 Reading a token Next token is token '@' (21.0: ) Shifting token '@' (21.0: ) Entering state 6 Reducing stack 0 by rule 6 (line 87): $1 = token '@' (21.0: ) Cleanup: popping nterm prog (1.1-19.5: ) 706. cxx-type.at:415: ok stderr: types.y:77.8-37: warning: unset value: $$ [-Wother] types.y: warning: 1 reduce/reduce conflict [-Wconflicts-rr] types.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples ./cxx-type.at:421: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o types types.c $LIBS 708. cxx-type.at:426: testing GLR: Resolve ambiguity, pure, locations ... ./cxx-type.at:427: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o types.c types.y stderr: types.y:87.8-37: warning: unset value: $$ [-Wother] types.y: warning: 1 reduce/reduce conflict [-Wconflicts-rr] types.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples ./cxx-type.at:427: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o types types.c $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from test.cc:63: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:1011:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1555:19: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:1011:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1555:19, inlined from 'void yy::parser::yypush_(const char*, state_type, symbol_type&&)' at test.cc:1562:13: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:1011:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1555:19, inlined from 'virtual int yy::parser::parse()' at test.cc:1851:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'std::vector<_Tp, _Alloc>::reference std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {yy::parser::stack_symbol_type}; _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = yy::parser::stack_symbol_type; _Alloc = std::allocator<yy::parser::stack_symbol_type>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'void yy::parser::stack<T, S>::push(T&&) [with T = yy::parser::stack_symbol_type; S = std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> >]' at test.cc:1011:24, inlined from 'void yy::parser::yypush_(const char*, stack_symbol_type&&)' at test.cc:1555:19, inlined from 'virtual int yy::parser::parse()' at test.cc:1940:15: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<yy::parser::stack_symbol_type*, std::vector<yy::parser::stack_symbol_type, std::allocator<yy::parser::stack_symbol_type> > >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./c++.at:1555: $PREPARSER ./test stderr: ./c++.at:1555: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ======== Testing with C++ standard flags: '' ./c++.at:1555: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS stderr: stdout: ./cxx-type.at:423: $PREPARSER ./types test-input stderr: syntax error ./cxx-type.at:423: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./cxx-type.at:423: $PREPARSER ./types -p test-input stderr: Starting parse Entering state 0 Reducing stack 0 by rule 1 (line 64): -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering 
state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token ';' () Shifting token ';' () Entering state 23 Reducing stack 0 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token '=' () Shifting token '=' () Entering state 22 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 29 Reading a token Next token is token ';' () Shifting token ';' () Entering state 30 Reducing stack 0 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt 
() -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 14 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 24 Reading a token Next token is token ';' () Reducing stack 0 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. 
Now in state 13. Stack 1 Entering state 13 Next token is token '+' () Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' () Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Returning to deterministic operation. Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. 
Stack 0 Entering state 8 Reading a token Next token is token ';' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 87); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. 
Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '=' () On stack 0, shifting token '=' () Stack 0 now in state 14 On stack 1, shifting token '=' () Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID () Stack 1 Entering state 22 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' () On stack 0, shifting token '+' () Stack 0 now in state 15 On stack 1, shifting token '+' () Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID () Stack 1 Entering state 15 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 83); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' () Reduced stack 0 by rule 10 (line 84); action deferred. 
Now in state 8. Stack 0 Entering state 8 Next token is token ';' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 83); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 89); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. 
Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ID () Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 20 Next token is token ID () syntax error Error: popping nterm expr () Error: popping token '(' () Error: popping token TYPENAME () Shifting token error () Entering state 3 Next token is token ID () Error: discarding token ID () Reading a token Next token is token ')' () Error: discarding token ')' () Reading a token Next token is token '=' () Error: discarding token '=' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token ';' () Entering state 3 Next token is token ';' () Shifting token ';' () Entering state 10 Reducing stack 0 by rule 5 (line 76): $1 = token error () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = 
nterm stmt ()
Entering state 7
Reducing stack 0 by rule 2 (line 65):
   $1 = nterm prog ()
   $2 = nterm stmt ()
-> $$ = nterm prog ()
Entering state 1
Reading a token
Next token is token '@' ()
Shifting token '@' ()
Entering state 6
Reducing stack 0 by rule 6 (line 77):
   $1 = token '@' ()
Cleanup: popping nterm prog ()
./cxx-type.at:423: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reducing stack 0 by rule 1 (line 64):
-> $$ = nterm prog ()
Entering state 1
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 5
Reducing stack 0 by rule 7 (line 80):
   $1 = token ID ()
-> $$ = nterm expr ()
Entering state 8
Reading a token
Next token is token '+' ()
Shifting token '+' ()
Entering state 15
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 5
Reducing stack 0 by rule 7 (line 80):
   $1 = token ID ()
-> $$ = nterm expr ()
Entering state 25
Reducing stack 0 by rule 9 (line 83):
   $1 = nterm expr ()
   $2 = token '+' ()
   $3 = nterm expr ()
-> $$ = nterm expr ()
Entering state 8
Reading a token
Next token is token ';' ()
Shifting token ';' ()
Entering state 16
Reducing stack 0 by rule 3 (line 74):
   $1 = nterm expr ()
   $2 = token ';' ()
-> $$ = nterm stmt ()
Entering state 7
Reducing stack 0 by rule 2 (line 65):
   $1 = nterm prog ()
   $2 = nterm stmt ()
-> $$ = nterm prog ()
Entering state 1
Reading a token
Next token is token TYPENAME ()
Shifting token TYPENAME ()
Entering state 4
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 11
Reducing stack 0 by rule 13 (line 94):
   $1 = token ID ()
-> $$ = nterm declarator ()
Entering state 13
Reading a token
Next token is token ';' ()
Shifting token ';' ()
Entering state 23
Reducing stack 0 by rule 11 (line 87):
   $1 = token TYPENAME ()
   $2 = nterm declarator ()
   $3 = token ';' ()
-> $$ = nterm decl ()
Entering state 9
Reducing stack 0 by rule 4 (line 75):
   $1 = nterm decl ()
-> $$ = nterm stmt ()
Entering state 7
Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token '=' () Shifting token '=' () Entering state 22 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 29 Reading a token Next token is token ';' () Shifting token ';' () Entering state 30 Reducing stack 0 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 14 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 24 Reading a token Next token is token ';' () Reducing stack 0 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = 
nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' () Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' () Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Returning to deterministic operation. 
Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. 
Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 87); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. 
Stack 0 Entering state 8 Reading a token Next token is token '=' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '=' () On stack 0, shifting token '=' () Stack 0 now in state 14 On stack 1, shifting token '=' () Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID () Stack 1 Entering state 22 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' () On stack 0, shifting token '+' () Stack 0 now in state 15 On stack 1, shifting token '+' () Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID () Stack 1 Entering state 15 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 83); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' () Reduced stack 0 by rule 10 (line 84); action deferred. Now in state 8. Stack 0 Entering state 8 Next token is token ';' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 83); action deferred. Now in state 29. 
Stack 1 Entering state 29 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 89); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. 
Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ID () Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 20 Next token is token ID () syntax error Error: popping nterm expr () Error: popping token '(' () Error: popping token TYPENAME () Shifting token error () Entering state 3 Next token is token ID () Error: discarding token ID () Reading a token Next token is token ')' () Error: discarding token ')' () Reading a token Next token is token '=' () Error: discarding token '=' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token ';' () Entering state 3 Next token is token ';' () Shifting token ';' () Entering state 10 Reducing stack 0 by rule 5 (line 76): $1 = token error () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = 
nterm stmt ()
Entering state 7
Reducing stack 0 by rule 2 (line 65):
   $1 = nterm prog ()
   $2 = nterm stmt ()
-> $$ = nterm prog ()
Entering state 1
Reading a token
Next token is token '@' ()
Shifting token '@' ()
Entering state 6
Reducing stack 0 by rule 6 (line 77):
   $1 = token '@' ()
Cleanup: popping nterm prog ()
707. cxx-type.at:420: ok
stderr:
stdout:
./c++.at:1555: ./check
./c++.at:1555: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -o test.cc test.y
709. cxx-type.at:432: testing GLR: Merge conflicting parses, impure, no locations ...
./cxx-type.at:433: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o types.c types.y
./c++.at:1555: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stderr:
stdout:
./cxx-type.at:429: $PREPARSER ./types test-input
types.y:77.8-37: warning: unset value: $$ [-Wother]
types.y: warning: 1 reduce/reduce conflict [-Wconflicts-rr]
types.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
./cxx-type.at:433: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o types types.c $LIBS
stderr:
17.5: syntax error
./cxx-type.at:429: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./cxx-type.at:429: $PREPARSER ./types -p test-input
stderr:
Starting parse
Entering state 0
Reducing stack 0 by rule 1 (line 71):
-> $$ = nterm prog (1.1: )
Entering state 1
Reading a token
Next token is token ID (3.0: )
Shifting token ID (3.0: )
Entering state 5
Reducing stack 0 by rule 7 (line 90):
   $1 = token ID (3.0: )
-> $$ = nterm expr (3.0: )
Entering state 8
Reading a token
Next token is token '+' (3.2: )
Shifting token '+' (3.2: )
Entering state 15
Reading a token
Next token is token ID (3.4: )
Shifting token ID (3.4: )
Entering state 5
Reducing stack 0 by rule 7 (line 90):
   $1 = token ID (3.4: )
-> $$ = nterm expr (3.4: )
Entering state 25
Reducing stack 0 by rule 9 (line 93):
   $1 = nterm expr
(3.0: ) $2 = token '+' (3.2: ) $3 = nterm expr (3.4: ) -> $$ = nterm expr (3.0-4: ) Entering state 8 Reading a token Next token is token ';' (3.5: ) Shifting token ';' (3.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (3.0-4: ) $2 = token ';' (3.5: ) -> $$ = nterm stmt (3.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1: ) $2 = nterm stmt (3.0-5: ) -> $$ = nterm prog (1.1-3.5: ) Entering state 1 Reading a token Next token is token TYPENAME (5.0: ) Shifting token TYPENAME (5.0: ) Entering state 4 Reading a token Next token is token ID (5.2: ) Shifting token ID (5.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (5.2: ) -> $$ = nterm declarator (5.2: ) Entering state 13 Reading a token Next token is token ';' (5.3: ) Shifting token ';' (5.3: ) Entering state 23 Reducing stack 0 by rule 11 (line 97): $1 = token TYPENAME (5.0: ) $2 = nterm declarator (5.2: ) $3 = token ';' (5.3: ) -> $$ = nterm decl (5.0-3: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (5.0-3: ) -> $$ = nterm stmt (5.0-3: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-3.5: ) $2 = nterm stmt (5.0-3: ) -> $$ = nterm prog (1.1-5.3: ) Entering state 1 Reading a token Next token is token TYPENAME (7.0: ) Shifting token TYPENAME (7.0: ) Entering state 4 Reading a token Next token is token ID (7.2: ) Shifting token ID (7.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (7.2: ) -> $$ = nterm declarator (7.2: ) Entering state 13 Reading a token Next token is token '=' (7.4: ) Shifting token '=' (7.4: ) Entering state 22 Reading a token Next token is token ID (7.6: ) Shifting token ID (7.6: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (7.6: ) -> $$ = nterm expr (7.6: ) Entering state 29 Reading a token Next token is token ';' (7.7: ) Shifting token ';' (7.7: ) Entering state 30 Reducing stack 0 by rule 12 (line 
99): $1 = token TYPENAME (7.0: ) $2 = nterm declarator (7.2: ) $3 = token '=' (7.4: ) $4 = nterm expr (7.6: ) $5 = token ';' (7.7: ) -> $$ = nterm decl (7.0-7: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (7.0-7: ) -> $$ = nterm stmt (7.0-7: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-5.3: ) $2 = nterm stmt (7.0-7: ) -> $$ = nterm prog (1.1-7.7: ) Entering state 1 Reading a token Next token is token ID (9.0: ) Shifting token ID (9.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.0: ) -> $$ = nterm expr (9.0: ) Entering state 8 Reading a token Next token is token '=' (9.2: ) Shifting token '=' (9.2: ) Entering state 14 Reading a token Next token is token ID (9.4: ) Shifting token ID (9.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.4: ) -> $$ = nterm expr (9.4: ) Entering state 24 Reading a token Next token is token ';' (9.5: ) Reducing stack 0 by rule 10 (line 94): $1 = nterm expr (9.0: ) $2 = token '=' (9.2: ) $3 = nterm expr (9.4: ) -> $$ = nterm expr (9.0-4: ) Entering state 8 Next token is token ';' (9.5: ) Shifting token ';' (9.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (9.0-4: ) $2 = token ';' (9.5: ) -> $$ = nterm stmt (9.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-7.7: ) $2 = nterm stmt (9.0-5: ) -> $$ = nterm prog (1.1-9.5: ) Entering state 1 Reading a token Next token is token TYPENAME (11.0: ) Shifting token TYPENAME (11.0: ) Entering state 4 Reading a token Next token is token '(' (11.2: ) Shifting token '(' (11.2: ) Entering state 12 Reading a token Next token is token ID (11.3: ) Shifting token ID (11.3: ) Entering state 18 Reading a token Next token is token ')' (11.4: ) Stack 0 Entering state 18 Next token is token ')' (11.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. 
Stack 1 Entering state 21 Next token is token ')' (11.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (11.4: ) Stack 1 Entering state 21 Next token is token ')' (11.4: ) On stack 0, shifting token ')' (11.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (11.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' (11.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' (11.6: ) Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' (11.6: ) Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 90): $1 = token ID (11.3: ) -> $$ = nterm expr (11.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (11.0: ) $2 = token '(' (11.2: ) $3 = nterm expr (11.3: ) $4 = token ')' (11.4: ) -> $$ = nterm expr (11.0-4: ) Returning to deterministic operation. 
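(Editor's note: the episode above is the heart of this GLR test. On input like `T (x) + y;`, the prefix `TYPENAME '(' ID ')'` is ambiguous between a parenthesized declarator and a cast-like expression, so the parser splits off stack 1; the `'+'` lookahead can only continue an expression, so the declarator stack dies and parsing returns to deterministic operation. The following is a toy Python model of that pruning decision — the function name and token sets are invented for illustration and are not Bison's implementation:

```python
def surviving_readings(lookahead):
    """Toy model of the split stacks after TYPENAME '(' ID ')'.

    'declaration' stands for the parenthesized-declarator stack (stack 1
    in the trace), 'expression' for the cast-like-expression stack
    (stack 0).  Returns the readings that can shift the lookahead.
    """
    survivors = set()
    if lookahead in {";", "="}:       # a declarator may be followed by ';' or '='
        survivors.add("declaration")
    if lookahead in {";", "=", "+"}:  # an expression may continue with any of these
        survivors.add("expression")
    return survivors

# '+' after the ')' kills the declaration reading, matching the trace:
# "Stack 1 dies.  Removing dead stacks."
assert surviving_readings("+") == {"expression"}
```

With lookahead `';'` (input line 13) or `'='` (input line 15) both readings survive, which is why those later episodes in the trace end in a stack merge rather than a stack death.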
Entering state 15 Reading a token Next token is token ID (11.8: ) Shifting token ID (11.8: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (11.8: ) -> $$ = nterm expr (11.8: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (11.0-4: ) $2 = token '+' (11.6: ) $3 = nterm expr (11.8: ) -> $$ = nterm expr (11.0-8: ) Entering state 8 Reading a token Next token is token ';' (11.9: ) Shifting token ';' (11.9: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (11.0-8: ) $2 = token ';' (11.9: ) -> $$ = nterm stmt (11.0-9: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-9.5: ) $2 = nterm stmt (11.0-9: ) -> $$ = nterm prog (1.1-11.9: ) Entering state 1 Reading a token Next token is token TYPENAME (13.0: ) Shifting token TYPENAME (13.0: ) Entering state 4 Reading a token Next token is token '(' (13.2: ) Shifting token '(' (13.2: ) Entering state 12 Reading a token Next token is token ID (13.3: ) Shifting token ID (13.3: ) Entering state 18 Reading a token Next token is token ')' (13.4: ) Stack 0 Entering state 18 Next token is token ')' (13.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (13.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (13.4: ) Stack 1 Entering state 21 Next token is token ')' (13.4: ) On stack 0, shifting token ')' (13.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (13.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' (13.5: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. 
Stack 1 Entering state 13 Next token is token ';' (13.5: ) On stack 0, shifting token ';' (13.5: ) Stack 0 now in state 16 On stack 1, shifting token ';' (13.5: ) Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (15.0: ) Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 97); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME (15.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (13.3: ) -> $$ = nterm declarator (13.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (13.2: ) $2 = nterm declarator (13.3: ) $3 = token ')' (13.4: ) -> $$ = nterm declarator (13.2-4: ) Reducing stack -1 by rule 11 (line 97): $1 = token TYPENAME (13.0: ) $2 = nterm declarator (13.2-4: ) $3 = token ';' (13.5: ) -> $$ = nterm decl (13.0-5: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (13.0-5: ) -> $$ = nterm stmt (13.0-5: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-11.9: ) $2 = nterm stmt (13.0-5: ) -> $$ = nterm prog (1.1-13.5: ) Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' (15.2: ) Shifting token '(' (15.2: ) Entering state 12 Reading a token Next token is token ID (15.3: ) Shifting token ID (15.3: ) Entering state 18 Reading a token Next token is token ')' (15.4: ) Stack 0 Entering state 18 Next token is token ')' (15.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. 
Stack 1 Entering state 21 Next token is token ')' (15.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (15.4: ) Stack 1 Entering state 21 Next token is token ')' (15.4: ) On stack 0, shifting token ')' (15.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (15.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' (15.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '=' (15.6: ) On stack 0, shifting token '=' (15.6: ) Stack 0 now in state 14 On stack 1, shifting token '=' (15.6: ) Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID (15.8: ) Stack 1 Entering state 22 Next token is token ID (15.8: ) On stack 0, shifting token ID (15.8: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.8: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' (15.10: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' (15.10: ) On stack 0, shifting token '+' (15.10: ) Stack 0 now in state 15 On stack 1, shifting token '+' (15.10: ) Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID (15.12: ) Stack 1 Entering state 15 Next token is token ID (15.12: ) On stack 0, shifting token ID (15.12: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.12: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 93); action deferred. Now in state 24. 
Stack 0 Entering state 24 Reading a token Next token is token ';' (15.13: ) Reduced stack 0 by rule 10 (line 94); action deferred. Now in state 8. Stack 0 Entering state 8 Next token is token ';' (15.13: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 93); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token ';' (15.13: ) On stack 0, shifting token ';' (15.13: ) Stack 0 now in state 16 On stack 1, shifting token ';' (15.13: ) Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (17.0: ) Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 99); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
On stack 0, shifting token TYPENAME (17.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (15.3: ) -> $$ = nterm declarator (15.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (15.2: ) $2 = nterm declarator (15.3: ) $3 = token ')' (15.4: ) -> $$ = nterm declarator (15.2-4: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.8: ) -> $$ = nterm expr (15.8: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.12: ) -> $$ = nterm expr (15.12: ) Reducing stack -1 by rule 9 (line 93): $1 = nterm expr (15.8: ) $2 = token '+' (15.10: ) $3 = nterm expr (15.12: ) -> $$ = nterm expr (15.8-12: ) Reducing stack -1 by rule 12 (line 99): $1 = token TYPENAME (15.0: ) $2 = nterm declarator (15.2-4: ) $3 = token '=' (15.6: ) $4 = nterm expr (15.8-12: ) $5 = token ';' (15.13: ) -> $$ = nterm decl (15.0-13: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (15.0-13: ) -> $$ = nterm stmt (15.0-13: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-13.5: ) $2 = nterm stmt (15.0-13: ) -> $$ = nterm prog (1.1-15.13: ) Returning to deterministic operation. 
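(Editor's note: while stacks are split, a GLR parser must not run user semantic actions, since it does not yet know which parse will win — hence the many "action deferred" lines above. Once the ambiguity is resolved (here by merging stack 1 into stack 0 at the `';'`), the winning stack's actions are replayed in order, which is what the "Reducing stack -1 by rule …" block shows. A minimal sketch of that defer-then-replay idea, with invented names that are not Bison's API:

```python
# Toy defer-then-replay: each reduction records a thunk instead of
# executing user code immediately ("action deferred" in the trace).
deferred = []

def reduce_(rule, action):
    deferred.append((rule, action))

values = []
reduce_(13, lambda: values.append("declarator"))  # rule 13: declarator: ID
reduce_(11, lambda: values.append("decl"))        # rule 11: decl: TYPENAME declarator ';'

# The surviving stack replays its recorded actions in order, mirroring
# the "Reducing stack -1 by rule ..." lines in the trace.
for rule, action in deferred:
    action()

assert values == ["declarator", "decl"]
```

In the real parser the deferred items also carry the semantic values and locations shown in the trace (e.g. `$1 = token ID (13.3: )`), so the replay produces exactly the result a deterministic parse of the winning reading would have produced.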
Entering state 4 Reading a token Next token is token '(' (17.2: ) Shifting token '(' (17.2: ) Entering state 12 Reading a token Next token is token ID (17.3: ) Shifting token ID (17.3: ) Entering state 18 Reading a token Next token is token ID (17.5: ) Reducing stack 0 by rule 7 (line 90): $1 = token ID (17.3: ) -> $$ = nterm expr (17.3: ) Entering state 20 Next token is token ID (17.5: ) 17.5: syntax error Error: popping nterm expr (17.3: ) Error: popping token '(' (17.2: ) Error: popping token TYPENAME (17.0: ) Shifting token error (17.0-5: ) Entering state 3 Next token is token ID (17.5: ) Error: discarding token ID (17.5: ) Reading a token Next token is token ')' (17.6: ) Error: discarding token ')' (17.6: ) Reading a token Next token is token '=' (17.8: ) Error: discarding token '=' (17.8: ) Reading a token Next token is token ID (17.10: ) Error: discarding token ID (17.10: ) Reading a token Next token is token '+' (17.12: ) Error: discarding token '+' (17.12: ) Reading a token Next token is token ID (17.14: ) Error: discarding token ID (17.14: ) Reading a token Next token is token ';' (17.15: ) Entering state 3 Next token is token ';' (17.15: ) Shifting token ';' (17.15: ) Entering state 10 Reducing stack 0 by rule 5 (line 86): $1 = token error (17.0-14: ) $2 = token ';' (17.15: ) -> $$ = nterm stmt (17.0-15: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-15.13: ) $2 = nterm stmt (17.0-15: ) -> $$ = nterm prog (1.1-17.15: ) Entering state 1 Reading a token Next token is token ID (19.0: ) Shifting token ID (19.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.0: ) -> $$ = nterm expr (19.0: ) Entering state 8 Reading a token Next token is token '+' (19.2: ) Shifting token '+' (19.2: ) Entering state 15 Reading a token Next token is token ID (19.4: ) Shifting token ID (19.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.4: ) -> $$ = nterm expr (19.4: ) Entering state 25 
Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (19.0: ) $2 = token '+' (19.2: ) $3 = nterm expr (19.4: ) -> $$ = nterm expr (19.0-4: ) Entering state 8 Reading a token Next token is token ';' (19.5: ) Shifting token ';' (19.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (19.0-4: ) $2 = token ';' (19.5: ) -> $$ = nterm stmt (19.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-17.15: ) $2 = nterm stmt (19.0-5: ) -> $$ = nterm prog (1.1-19.5: ) Entering state 1 Reading a token Next token is token '@' (21.0: ) Shifting token '@' (21.0: ) Entering state 6 Reducing stack 0 by rule 6 (line 87): $1 = token '@' (21.0: ) Cleanup: popping nterm prog (1.1-19.5: ) ./cxx-type.at:429: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reducing stack 0 by rule 1 (line 71): -> $$ = nterm prog (1.1: ) Entering state 1 Reading a token Next token is token ID (3.0: ) Shifting token ID (3.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (3.0: ) -> $$ = nterm expr (3.0: ) Entering state 8 Reading a token Next token is token '+' (3.2: ) Shifting token '+' (3.2: ) Entering state 15 Reading a token Next token is token ID (3.4: ) Shifting token ID (3.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (3.4: ) -> $$ = nterm expr (3.4: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (3.0: ) $2 = token '+' (3.2: ) $3 = nterm expr (3.4: ) -> $$ = nterm expr (3.0-4: ) Entering state 8 Reading a token Next token is token ';' (3.5: ) Shifting token ';' (3.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (3.0-4: ) $2 = token ';' (3.5: ) -> $$ = nterm stmt (3.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1: ) $2 = nterm stmt (3.0-5: ) -> $$ = nterm prog (1.1-3.5: ) Entering state 1 Reading a token Next token is token TYPENAME (5.0: ) Shifting 
token TYPENAME (5.0: ) Entering state 4 Reading a token Next token is token ID (5.2: ) Shifting token ID (5.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (5.2: ) -> $$ = nterm declarator (5.2: ) Entering state 13 Reading a token Next token is token ';' (5.3: ) Shifting token ';' (5.3: ) Entering state 23 Reducing stack 0 by rule 11 (line 97): $1 = token TYPENAME (5.0: ) $2 = nterm declarator (5.2: ) $3 = token ';' (5.3: ) -> $$ = nterm decl (5.0-3: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (5.0-3: ) -> $$ = nterm stmt (5.0-3: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-3.5: ) $2 = nterm stmt (5.0-3: ) -> $$ = nterm prog (1.1-5.3: ) Entering state 1 Reading a token Next token is token TYPENAME (7.0: ) Shifting token TYPENAME (7.0: ) Entering state 4 Reading a token Next token is token ID (7.2: ) Shifting token ID (7.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (7.2: ) -> $$ = nterm declarator (7.2: ) Entering state 13 Reading a token Next token is token '=' (7.4: ) Shifting token '=' (7.4: ) Entering state 22 Reading a token Next token is token ID (7.6: ) Shifting token ID (7.6: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (7.6: ) -> $$ = nterm expr (7.6: ) Entering state 29 Reading a token Next token is token ';' (7.7: ) Shifting token ';' (7.7: ) Entering state 30 Reducing stack 0 by rule 12 (line 99): $1 = token TYPENAME (7.0: ) $2 = nterm declarator (7.2: ) $3 = token '=' (7.4: ) $4 = nterm expr (7.6: ) $5 = token ';' (7.7: ) -> $$ = nterm decl (7.0-7: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (7.0-7: ) -> $$ = nterm stmt (7.0-7: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-5.3: ) $2 = nterm stmt (7.0-7: ) -> $$ = nterm prog (1.1-7.7: ) Entering state 1 Reading a token Next token is token ID (9.0: ) Shifting token ID (9.0: ) Entering state 5 
Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.0: ) -> $$ = nterm expr (9.0: ) Entering state 8 Reading a token Next token is token '=' (9.2: ) Shifting token '=' (9.2: ) Entering state 14 Reading a token Next token is token ID (9.4: ) Shifting token ID (9.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.4: ) -> $$ = nterm expr (9.4: ) Entering state 24 Reading a token Next token is token ';' (9.5: ) Reducing stack 0 by rule 10 (line 94): $1 = nterm expr (9.0: ) $2 = token '=' (9.2: ) $3 = nterm expr (9.4: ) -> $$ = nterm expr (9.0-4: ) Entering state 8 Next token is token ';' (9.5: ) Shifting token ';' (9.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (9.0-4: ) $2 = token ';' (9.5: ) -> $$ = nterm stmt (9.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-7.7: ) $2 = nterm stmt (9.0-5: ) -> $$ = nterm prog (1.1-9.5: ) Entering state 1 Reading a token Next token is token TYPENAME (11.0: ) Shifting token TYPENAME (11.0: ) Entering state 4 Reading a token Next token is token '(' (11.2: ) Shifting token '(' (11.2: ) Entering state 12 Reading a token Next token is token ID (11.3: ) Shifting token ID (11.3: ) Entering state 18 Reading a token Next token is token ')' (11.4: ) Stack 0 Entering state 18 Next token is token ')' (11.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (11.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (11.4: ) Stack 1 Entering state 21 Next token is token ')' (11.4: ) On stack 0, shifting token ')' (11.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (11.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. 
Stack 0 Entering state 8 Reading a token Next token is token '+' (11.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' (11.6: ) Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' (11.6: ) Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 90): $1 = token ID (11.3: ) -> $$ = nterm expr (11.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (11.0: ) $2 = token '(' (11.2: ) $3 = nterm expr (11.3: ) $4 = token ')' (11.4: ) -> $$ = nterm expr (11.0-4: ) Returning to deterministic operation. Entering state 15 Reading a token Next token is token ID (11.8: ) Shifting token ID (11.8: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (11.8: ) -> $$ = nterm expr (11.8: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (11.0-4: ) $2 = token '+' (11.6: ) $3 = nterm expr (11.8: ) -> $$ = nterm expr (11.0-8: ) Entering state 8 Reading a token Next token is token ';' (11.9: ) Shifting token ';' (11.9: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (11.0-8: ) $2 = token ';' (11.9: ) -> $$ = nterm stmt (11.0-9: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-9.5: ) $2 = nterm stmt (11.0-9: ) -> $$ = nterm prog (1.1-11.9: ) Entering state 1 Reading a token Next token is token TYPENAME (13.0: ) Shifting token TYPENAME (13.0: ) Entering state 4 Reading a token Next token is token '(' (13.2: ) Shifting token '(' (13.2: ) Entering state 12 Reading a token Next token is token ID (13.3: ) Shifting token ID (13.3: ) Entering state 18 Reading a token Next token is token ')' (13.4: ) Stack 0 Entering state 18 Next token is token ')' (13.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (13.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. 
Now in state 20. Stack 0 Entering state 20 Next token is token ')' (13.4: ) Stack 1 Entering state 21 Next token is token ')' (13.4: ) On stack 0, shifting token ')' (13.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (13.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' (13.5: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' (13.5: ) On stack 0, shifting token ';' (13.5: ) Stack 0 now in state 16 On stack 1, shifting token ';' (13.5: ) Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (15.0: ) Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 97); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
On stack 0, shifting token TYPENAME (15.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (13.3: ) -> $$ = nterm declarator (13.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (13.2: ) $2 = nterm declarator (13.3: ) $3 = token ')' (13.4: ) -> $$ = nterm declarator (13.2-4: ) Reducing stack -1 by rule 11 (line 97): $1 = token TYPENAME (13.0: ) $2 = nterm declarator (13.2-4: ) $3 = token ';' (13.5: ) -> $$ = nterm decl (13.0-5: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (13.0-5: ) -> $$ = nterm stmt (13.0-5: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-11.9: ) $2 = nterm stmt (13.0-5: ) -> $$ = nterm prog (1.1-13.5: ) Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' (15.2: ) Shifting token '(' (15.2: ) Entering state 12 Reading a token Next token is token ID (15.3: ) Shifting token ID (15.3: ) Entering state 18 Reading a token Next token is token ')' (15.4: ) Stack 0 Entering state 18 Next token is token ')' (15.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (15.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (15.4: ) Stack 1 Entering state 21 Next token is token ')' (15.4: ) On stack 0, shifting token ')' (15.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (15.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' (15.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. 
Stack 1 Entering state 13 Next token is token '=' (15.6: ) On stack 0, shifting token '=' (15.6: ) Stack 0 now in state 14 On stack 1, shifting token '=' (15.6: ) Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID (15.8: ) Stack 1 Entering state 22 Next token is token ID (15.8: ) On stack 0, shifting token ID (15.8: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.8: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' (15.10: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' (15.10: ) On stack 0, shifting token '+' (15.10: ) Stack 0 now in state 15 On stack 1, shifting token '+' (15.10: ) Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID (15.12: ) Stack 1 Entering state 15 Next token is token ID (15.12: ) On stack 0, shifting token ID (15.12: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.12: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 93); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' (15.13: ) Reduced stack 0 by rule 10 (line 94); action deferred. Now in state 8. Stack 0 Entering state 8 Next token is token ';' (15.13: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 93); action deferred. Now in state 29. 
Stack 1 Entering state 29 Next token is token ';' (15.13: ) On stack 0, shifting token ';' (15.13: ) Stack 0 now in state 16 On stack 1, shifting token ';' (15.13: ) Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (17.0: ) Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 99); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME (17.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (15.3: ) -> $$ = nterm declarator (15.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (15.2: ) $2 = nterm declarator (15.3: ) $3 = token ')' (15.4: ) -> $$ = nterm declarator (15.2-4: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.8: ) -> $$ = nterm expr (15.8: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.12: ) -> $$ = nterm expr (15.12: ) Reducing stack -1 by rule 9 (line 93): $1 = nterm expr (15.8: ) $2 = token '+' (15.10: ) $3 = nterm expr (15.12: ) -> $$ = nterm expr (15.8-12: ) Reducing stack -1 by rule 12 (line 99): $1 = token TYPENAME (15.0: ) $2 = nterm declarator (15.2-4: ) $3 = token '=' (15.6: ) $4 = nterm expr (15.8-12: ) $5 = token ';' (15.13: ) -> $$ = nterm decl (15.0-13: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (15.0-13: ) -> $$ = nterm stmt (15.0-13: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-13.5: ) $2 = nterm stmt (15.0-13: ) -> $$ = nterm prog (1.1-15.13: ) Returning to deterministic operation. 
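(Editor's note: the stretch of trace that follows exercises Bison's error recovery on input line 17. After the syntax error at 17.5 the parser pops states, shifts the `error` token, and discards lookaheads — the "Error: discarding token …" lines — until the synchronizing `';'` arrives and the error rule can reduce. A toy sketch of that panic-mode discard loop, with invented names that are not Bison's machinery:

```python
def recover(tokens, sync=";"):
    """Toy panic-mode recovery: discard lookaheads until the
    synchronizing token can be shifted."""
    discarded = []
    for tok in tokens:
        if tok == sync:
            return discarded, tok  # shift ';' and reduce the error rule
        discarded.append(tok)      # "Error: discarding token ..."
    raise SyntaxError("no synchronizing token found")

# The tokens discarded at input line 17 in the trace:
discarded, shifted = recover(["ID", ")", "=", "ID", "+", "ID", ";"])
assert discarded == ["ID", ")", "=", "ID", "+", "ID"]
```

Note that in the trace the resulting `error` token carries the merged location `(17.0-14: )` of everything popped and discarded, so the reduced `stmt` spans `17.0-15`.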
Entering state 4 Reading a token Next token is token '(' (17.2: ) Shifting token '(' (17.2: ) Entering state 12 Reading a token Next token is token ID (17.3: ) Shifting token ID (17.3: ) Entering state 18 Reading a token Next token is token ID (17.5: ) Reducing stack 0 by rule 7 (line 90): $1 = token ID (17.3: ) -> $$ = nterm expr (17.3: ) Entering state 20 Next token is token ID (17.5: ) 17.5: syntax error Error: popping nterm expr (17.3: ) Error: popping token '(' (17.2: ) Error: popping token TYPENAME (17.0: ) Shifting token error (17.0-5: ) Entering state 3 Next token is token ID (17.5: ) Error: discarding token ID (17.5: ) Reading a token Next token is token ')' (17.6: ) Error: discarding token ')' (17.6: ) Reading a token Next token is token '=' (17.8: ) Error: discarding token '=' (17.8: ) Reading a token Next token is token ID (17.10: ) Error: discarding token ID (17.10: ) Reading a token Next token is token '+' (17.12: ) Error: discarding token '+' (17.12: ) Reading a token Next token is token ID (17.14: ) Error: discarding token ID (17.14: ) Reading a token Next token is token ';' (17.15: ) Entering state 3 Next token is token ';' (17.15: ) Shifting token ';' (17.15: ) Entering state 10 Reducing stack 0 by rule 5 (line 86): $1 = token error (17.0-14: ) $2 = token ';' (17.15: ) -> $$ = nterm stmt (17.0-15: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-15.13: ) $2 = nterm stmt (17.0-15: ) -> $$ = nterm prog (1.1-17.15: ) Entering state 1 Reading a token Next token is token ID (19.0: ) Shifting token ID (19.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.0: ) -> $$ = nterm expr (19.0: ) Entering state 8 Reading a token Next token is token '+' (19.2: ) Shifting token '+' (19.2: ) Entering state 15 Reading a token Next token is token ID (19.4: ) Shifting token ID (19.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.4: ) -> $$ = nterm expr (19.4: ) Entering state 25 
Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (19.0: ) $2 = token '+' (19.2: ) $3 = nterm expr (19.4: ) -> $$ = nterm expr (19.0-4: ) Entering state 8 Reading a token Next token is token ';' (19.5: ) Shifting token ';' (19.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (19.0-4: ) $2 = token ';' (19.5: ) -> $$ = nterm stmt (19.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-17.15: ) $2 = nterm stmt (19.0-5: ) -> $$ = nterm prog (1.1-19.5: ) Entering state 1 Reading a token Next token is token '@' (21.0: ) Shifting token '@' (21.0: ) Entering state 6 Reducing stack 0 by rule 6 (line 87): $1 = token '@' (21.0: ) Cleanup: popping nterm prog (1.1-19.5: ) 708. cxx-type.at:426: ok 710. cxx-type.at:438: testing GLR: Merge conflicting parses, impure, locations ... ./cxx-type.at:439: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o types.c types.y stderr: types.y:87.8-37: warning: unset value: $$ [-Wother] types.y: warning: 1 reduce/reduce conflict [-Wconflicts-rr] types.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples ./cxx-type.at:439: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o types types.c $LIBS stderr: stdout: ./cxx-type.at:435: $PREPARSER ./types test-input stderr: syntax error ./cxx-type.at:435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./cxx-type.at:435: $PREPARSER ./types -p test-input stderr: Starting parse Entering state 0 Reducing stack 0 by rule 1 (line 64): -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () 
-> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token ';' () Shifting token ';' () Entering state 23 Reducing stack 0 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token '=' () Shifting token '=' () Entering state 22 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 29 Reading a token Next token is token ';' () Shifting token ';' () Entering state 30 Reducing stack 0 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Entering state 9 
Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 14 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 24 Reading a token Next token is token ';' () Reducing stack 0 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. 
Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' () Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' () Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Returning to deterministic operation. Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. 
Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 87); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
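The split-and-merge cycle above can be illustrated with a toy sketch (this is not Bison's implementation; names and structure are invented for illustration). On the statement form `TYPENAME '(' ID ')' ... ';'`, the GLR driver splits: stack 0 reads the parenthesized `ID` as an expression, stack 1 as a declarator. The token after `')'` decides which stacks survive; when both reach `';'`, they are merged, exactly as in the "Splitting off stack 1 from 0" / "Merging stack 1 into stack 0" lines of the trace.

```python
# Toy sketch of the GLR behavior traced above (not Bison's real driver).
# Both hypotheses for "TYPENAME ( ID )" are carried forward in parallel.
def parse_paren_stmt(tokens):
    """Return the set of viable readings of TYPENAME '(' ID ')' given
    the token that follows the closing paren."""
    stacks = {"expr", "declarator"}   # stack 0 and stack 1 after the split
    follow = tokens[0]                # lookahead after ')'
    if follow == "+":
        # Only an expression can be the left operand of '+':
        # in the trace, "Stack 1 dies. Removing dead stacks."
        stacks.discard("declarator")
    elif follow == ";":
        # Both readings reach ';': "Merging stack 1 into stack 0."
        pass
    return stacks

print(parse_paren_stmt(["+"]))   # only the expression reading survives
print(parse_paren_stmt([";"]))   # both readings survive and get merged
```

With `'+'` as lookahead the declarator stack dies and the parser returns to deterministic operation; with `';'` the deferred semantic actions of both stacks are replayed (the "Reducing stack -1" lines) after the merge.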
On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Reducing stack -1 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. 
Stack 1 Entering state 13 Next token is token '=' () On stack 0, shifting token '=' () Stack 0 now in state 14 On stack 1, shifting token '=' () Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID () Stack 1 Entering state 22 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' () On stack 0, shifting token '+' () Stack 0 now in state 15 On stack 1, shifting token '+' () Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID () Stack 1 Entering state 15 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 83); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' () Reduced stack 0 by rule 10 (line 84); action deferred. Now in state 8. Stack 0 Entering state 8 Next token is token ';' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 83); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. 
Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 89); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm 
stmt () -> $$ = nterm prog () Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ID () Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 20 Next token is token ID () syntax error Error: popping nterm expr () Error: popping token '(' () Error: popping token TYPENAME () Shifting token error () Entering state 3 Next token is token ID () Error: discarding token ID () Reading a token Next token is token ')' () Error: discarding token ')' () Reading a token Next token is token '=' () Error: discarding token '=' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token ';' () Entering state 3 Next token is token ';' () Shifting token ';' () Entering state 10 Reducing stack 0 by rule 5 (line 76): $1 = token error () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 
by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token '@' () Shifting token '@' () Entering state 6 Reducing stack 0 by rule 6 (line 77): $1 = token '@' () Cleanup: popping nterm prog () ./cxx-type.at:435: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reducing stack 0 by rule 1 (line 64): -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token ';' () Shifting token ';' () Entering state 23 Reducing stack 0 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): 
$1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token '=' () Shifting token '=' () Entering state 22 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 29 Reading a token Next token is token ';' () Shifting token ';' () Entering state 30 Reducing stack 0 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 14 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 24 Reading a token Next token is token ';' () Reducing stack 0 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 
(line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' () Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' () Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Returning to deterministic operation. 
Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. 
Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 87); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Reducing stack -1 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. 
Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '=' () On stack 0, shifting token '=' () Stack 0 now in state 14 On stack 1, shifting token '=' () Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID () Stack 1 Entering state 22 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' () On stack 0, shifting token '+' () Stack 0 now in state 15 On stack 1, shifting token '+' () Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID () Stack 1 Entering state 15 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 83); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' () Reduced stack 0 by rule 10 (line 84); action deferred. Now in state 8. 
Stack 0 Entering state 8 Next token is token ';' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 83); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 89); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
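The earlier "syntax error" passage of this trace shows panic-mode recovery: the parser pops states until one can shift the `error` token, then discards input until a synchronizing token (here `';'`) lets it reduce via `stmt: error ';'`. A minimal sketch of that discard loop (a hypothetical helper, not part of the test suite):

```python
# Sketch of the panic-mode discard loop seen in the trace
# ("Error: discarding token ..." until ';' is shifted).
def recover(tokens, sync=";"):
    """Discard tokens up to and including the sync token.
    Returns (discarded tokens, remaining input)."""
    discarded = []
    while tokens and tokens[0] != sync:
        discarded.append(tokens.pop(0))   # "Error: discarding token ..."
    if tokens:
        tokens.pop(0)                     # shift ';' -> reduce stmt: error ';'
    return discarded, tokens

# The erroneous statement from the trace, followed by a valid one:
bad = ["ID", ")", "=", "ID", "+", "ID", ";", "ID", "+", "ID", ";"]
dropped, rest = recover(bad)
print(dropped)  # ['ID', ')', '=', 'ID', '+', 'ID'] -- matches the trace
print(rest)     # parsing resumes with the next statement
```

The six discarded tokens match the `Error: discarding token` lines in the trace exactly, after which the parser reduces `error ';'` to `stmt` and continues normally.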
On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. 
Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ID () Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 20 Next token is token ID () syntax error Error: popping nterm expr () Error: popping token '(' () Error: popping token TYPENAME () Shifting token error () Entering state 3 Next token is token ID () Error: discarding token ID () Reading a token Next token is token ')' () Error: discarding token ')' () Reading a token Next token is token '=' () Error: discarding token '=' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token ';' () Entering state 3 Next token is token ';' () Shifting token ';' () Entering state 10 Reducing stack 0 by rule 5 (line 76): $1 = token error () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = 
nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token '@' () Shifting token '@' () Entering state 6 Reducing stack 0 by rule 6 (line 77): $1 = token '@' () Cleanup: popping nterm prog () 709. cxx-type.at:432: ok 711. cxx-type.at:444: testing GLR: Merge conflicting parses, pure, no locations ... ./cxx-type.at:445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o types.c types.y stderr: types.y:77.8-37: warning: unset value: $$ [-Wother] types.y: warning: 1 reduce/reduce conflict [-Wconflicts-rr] types.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples ./cxx-type.at:445: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o types types.c $LIBS stderr: stdout: ./cxx-type.at:441: $PREPARSER ./types test-input stderr: 17.5: syntax error ./cxx-type.at:441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./cxx-type.at:441: $PREPARSER ./types -p test-input stderr: Starting parse Entering state 0 Reducing stack 0 by rule 1 (line 71): -> $$ = nterm prog (1.1: ) Entering state 1 Reading a token Next token is token ID (3.0: ) Shifting token ID (3.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (3.0: ) -> $$ = nterm expr (3.0: ) Entering state 8 Reading a token Next token is token '+' (3.2: ) Shifting token '+' (3.2: ) Entering state 15 Reading a token Next token is token ID (3.4: ) Shifting token ID (3.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (3.4: ) -> $$ = nterm expr (3.4: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (3.0: ) $2 = token '+' (3.2: ) $3 = nterm expr (3.4: ) -> $$ = nterm expr (3.0-4: ) Entering state 8 Reading a token Next token is token ';' (3.5: ) Shifting token ';' (3.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (3.0-4: ) $2 
= token ';' (3.5: ) -> $$ = nterm stmt (3.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1: ) $2 = nterm stmt (3.0-5: ) -> $$ = nterm prog (1.1-3.5: ) Entering state 1 Reading a token Next token is token TYPENAME (5.0: ) Shifting token TYPENAME (5.0: ) Entering state 4 Reading a token Next token is token ID (5.2: ) Shifting token ID (5.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (5.2: ) -> $$ = nterm declarator (5.2: ) Entering state 13 Reading a token Next token is token ';' (5.3: ) Shifting token ';' (5.3: ) Entering state 23 Reducing stack 0 by rule 11 (line 97): $1 = token TYPENAME (5.0: ) $2 = nterm declarator (5.2: ) $3 = token ';' (5.3: ) -> $$ = nterm decl (5.0-3: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (5.0-3: ) -> $$ = nterm stmt (5.0-3: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-3.5: ) $2 = nterm stmt (5.0-3: ) -> $$ = nterm prog (1.1-5.3: ) Entering state 1 Reading a token Next token is token TYPENAME (7.0: ) Shifting token TYPENAME (7.0: ) Entering state 4 Reading a token Next token is token ID (7.2: ) Shifting token ID (7.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (7.2: ) -> $$ = nterm declarator (7.2: ) Entering state 13 Reading a token Next token is token '=' (7.4: ) Shifting token '=' (7.4: ) Entering state 22 Reading a token Next token is token ID (7.6: ) Shifting token ID (7.6: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (7.6: ) -> $$ = nterm expr (7.6: ) Entering state 29 Reading a token Next token is token ';' (7.7: ) Shifting token ';' (7.7: ) Entering state 30 Reducing stack 0 by rule 12 (line 99): $1 = token TYPENAME (7.0: ) $2 = nterm declarator (7.2: ) $3 = token '=' (7.4: ) $4 = nterm expr (7.6: ) $5 = token ';' (7.7: ) -> $$ = nterm decl (7.0-7: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (7.0-7: ) -> $$ = nterm stmt 
(7.0-7: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-5.3: ) $2 = nterm stmt (7.0-7: ) -> $$ = nterm prog (1.1-7.7: ) Entering state 1 Reading a token Next token is token ID (9.0: ) Shifting token ID (9.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.0: ) -> $$ = nterm expr (9.0: ) Entering state 8 Reading a token Next token is token '=' (9.2: ) Shifting token '=' (9.2: ) Entering state 14 Reading a token Next token is token ID (9.4: ) Shifting token ID (9.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.4: ) -> $$ = nterm expr (9.4: ) Entering state 24 Reading a token Next token is token ';' (9.5: ) Reducing stack 0 by rule 10 (line 94): $1 = nterm expr (9.0: ) $2 = token '=' (9.2: ) $3 = nterm expr (9.4: ) -> $$ = nterm expr (9.0-4: ) Entering state 8 Next token is token ';' (9.5: ) Shifting token ';' (9.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (9.0-4: ) $2 = token ';' (9.5: ) -> $$ = nterm stmt (9.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-7.7: ) $2 = nterm stmt (9.0-5: ) -> $$ = nterm prog (1.1-9.5: ) Entering state 1 Reading a token Next token is token TYPENAME (11.0: ) Shifting token TYPENAME (11.0: ) Entering state 4 Reading a token Next token is token '(' (11.2: ) Shifting token '(' (11.2: ) Entering state 12 Reading a token Next token is token ID (11.3: ) Shifting token ID (11.3: ) Entering state 18 Reading a token Next token is token ')' (11.4: ) Stack 0 Entering state 18 Next token is token ')' (11.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (11.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. 
Stack 0 Entering state 20 Next token is token ')' (11.4: ) Stack 1 Entering state 21 Next token is token ')' (11.4: ) On stack 0, shifting token ')' (11.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (11.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' (11.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' (11.6: ) Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' (11.6: ) Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 90): $1 = token ID (11.3: ) -> $$ = nterm expr (11.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (11.0: ) $2 = token '(' (11.2: ) $3 = nterm expr (11.3: ) $4 = token ')' (11.4: ) -> $$ = nterm expr (11.0-4: ) Returning to deterministic operation. Entering state 15 Reading a token Next token is token ID (11.8: ) Shifting token ID (11.8: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (11.8: ) -> $$ = nterm expr (11.8: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (11.0-4: ) $2 = token '+' (11.6: ) $3 = nterm expr (11.8: ) -> $$ = nterm expr (11.0-8: ) Entering state 8 Reading a token Next token is token ';' (11.9: ) Shifting token ';' (11.9: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (11.0-8: ) $2 = token ';' (11.9: ) -> $$ = nterm stmt (11.0-9: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-9.5: ) $2 = nterm stmt (11.0-9: ) -> $$ = nterm prog (1.1-11.9: ) Entering state 1 Reading a token Next token is token TYPENAME (13.0: ) Shifting token TYPENAME (13.0: ) Entering state 4 Reading a token Next token is token '(' (13.2: ) Shifting token '(' (13.2: ) Entering state 12 Reading a token Next token is token ID (13.3: ) Shifting token ID (13.3: ) 
Entering state 18 Reading a token Next token is token ')' (13.4: ) Stack 0 Entering state 18 Next token is token ')' (13.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (13.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (13.4: ) Stack 1 Entering state 21 Next token is token ')' (13.4: ) On stack 0, shifting token ')' (13.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (13.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' (13.5: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' (13.5: ) On stack 0, shifting token ';' (13.5: ) Stack 0 now in state 16 On stack 1, shifting token ';' (13.5: ) Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (15.0: ) Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 97); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
On stack 0, shifting token TYPENAME (15.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (13.3: ) -> $$ = nterm declarator (13.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (13.2: ) $2 = nterm declarator (13.3: ) $3 = token ')' (13.4: ) -> $$ = nterm declarator (13.2-4: ) Reducing stack -1 by rule 11 (line 97): $1 = token TYPENAME (13.0: ) $2 = nterm declarator (13.2-4: ) $3 = token ';' (13.5: ) -> $$ = nterm decl (13.0-5: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (13.0-5: ) -> $$ = nterm stmt (13.0-5: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (13.3: ) -> $$ = nterm expr (13.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (13.0: ) $2 = token '(' (13.2: ) $3 = nterm expr (13.3: ) $4 = token ')' (13.4: ) -> $$ = nterm expr (13.0-4: ) Reducing stack -1 by rule 3 (line 84): $1 = nterm expr (13.0-4: ) $2 = token ';' (13.5: ) -> $$ = nterm stmt (13.0-5: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-11.9: ) $2 = nterm stmt (13.0-5: ) -> $$ = nterm prog (1.1-13.5: ) Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' (15.2: ) Shifting token '(' (15.2: ) Entering state 12 Reading a token Next token is token ID (15.3: ) Shifting token ID (15.3: ) Entering state 18 Reading a token Next token is token ')' (15.4: ) Stack 0 Entering state 18 Next token is token ')' (15.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (15.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (15.4: ) Stack 1 Entering state 21 Next token is token ')' (15.4: ) On stack 0, shifting token ')' (15.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (15.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. 
Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' (15.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '=' (15.6: ) On stack 0, shifting token '=' (15.6: ) Stack 0 now in state 14 On stack 1, shifting token '=' (15.6: ) Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID (15.8: ) Stack 1 Entering state 22 Next token is token ID (15.8: ) On stack 0, shifting token ID (15.8: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.8: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' (15.10: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' (15.10: ) On stack 0, shifting token '+' (15.10: ) Stack 0 now in state 15 On stack 1, shifting token '+' (15.10: ) Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID (15.12: ) Stack 1 Entering state 15 Next token is token ID (15.12: ) On stack 0, shifting token ID (15.12: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.12: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 93); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' (15.13: ) Reduced stack 0 by rule 10 (line 94); action deferred. Now in state 8. Stack 0 Entering state 8 Next token is token ';' (15.13: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 93); action deferred. Now in state 29. 
Stack 1 Entering state 29 Next token is token ';' (15.13: ) On stack 0, shifting token ';' (15.13: ) Stack 0 now in state 16 On stack 1, shifting token ';' (15.13: ) Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (17.0: ) Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 99); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME (17.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (15.3: ) -> $$ = nterm declarator (15.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (15.2: ) $2 = nterm declarator (15.3: ) $3 = token ')' (15.4: ) -> $$ = nterm declarator (15.2-4: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.8: ) -> $$ = nterm expr (15.8: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.12: ) -> $$ = nterm expr (15.12: ) Reducing stack -1 by rule 9 (line 93): $1 = nterm expr (15.8: ) $2 = token '+' (15.10: ) $3 = nterm expr (15.12: ) -> $$ = nterm expr (15.8-12: ) Reducing stack -1 by rule 12 (line 99): $1 = token TYPENAME (15.0: ) $2 = nterm declarator (15.2-4: ) $3 = token '=' (15.6: ) $4 = nterm expr (15.8-12: ) $5 = token ';' (15.13: ) -> $$ = nterm decl (15.0-13: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (15.0-13: ) -> $$ = nterm stmt (15.0-13: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.3: ) -> $$ = nterm expr (15.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (15.0: ) $2 = token '(' (15.2: ) $3 = nterm expr (15.3: ) $4 = token ')' (15.4: ) -> $$ = 
nterm expr (15.0-4: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.8: ) -> $$ = nterm expr (15.8: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.12: ) -> $$ = nterm expr (15.12: ) Reducing stack -1 by rule 9 (line 93): $1 = nterm expr (15.8: ) $2 = token '+' (15.10: ) $3 = nterm expr (15.12: ) -> $$ = nterm expr (15.8-12: ) Reducing stack -1 by rule 10 (line 94): $1 = nterm expr (15.0-4: ) $2 = token '=' (15.6: ) $3 = nterm expr (15.8-12: ) -> $$ = nterm expr (15.0-12: ) Reducing stack -1 by rule 3 (line 84): $1 = nterm expr (15.0-12: ) $2 = token ';' (15.13: ) -> $$ = nterm stmt (15.0-13: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-13.5: ) $2 = nterm stmt (15.0-13: ) -> $$ = nterm prog (1.1-15.13: ) Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' (17.2: ) Shifting token '(' (17.2: ) Entering state 12 Reading a token Next token is token ID (17.3: ) Shifting token ID (17.3: ) Entering state 18 Reading a token Next token is token ID (17.5: ) Reducing stack 0 by rule 7 (line 90): $1 = token ID (17.3: ) -> $$ = nterm expr (17.3: ) Entering state 20 Next token is token ID (17.5: ) 17.5: syntax error Error: popping nterm expr (17.3: ) Error: popping token '(' (17.2: ) Error: popping token TYPENAME (17.0: ) Shifting token error (17.0-5: ) Entering state 3 Next token is token ID (17.5: ) Error: discarding token ID (17.5: ) Reading a token Next token is token ')' (17.6: ) Error: discarding token ')' (17.6: ) Reading a token Next token is token '=' (17.8: ) Error: discarding token '=' (17.8: ) Reading a token Next token is token ID (17.10: ) Error: discarding token ID (17.10: ) Reading a token Next token is token '+' (17.12: ) Error: discarding token '+' (17.12: ) Reading a token Next token is token ID (17.14: ) Error: discarding token ID (17.14: ) Reading a token Next token is token ';' (17.15: ) Entering state 3 Next token is token ';' (17.15: ) Shifting token ';' (17.15: ) Entering 
state 10 Reducing stack 0 by rule 5 (line 86): $1 = token error (17.0-14: ) $2 = token ';' (17.15: ) -> $$ = nterm stmt (17.0-15: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-15.13: ) $2 = nterm stmt (17.0-15: ) -> $$ = nterm prog (1.1-17.15: ) Entering state 1 Reading a token Next token is token ID (19.0: ) Shifting token ID (19.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.0: ) -> $$ = nterm expr (19.0: ) Entering state 8 Reading a token Next token is token '+' (19.2: ) Shifting token '+' (19.2: ) Entering state 15 Reading a token Next token is token ID (19.4: ) Shifting token ID (19.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.4: ) -> $$ = nterm expr (19.4: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (19.0: ) $2 = token '+' (19.2: ) $3 = nterm expr (19.4: ) -> $$ = nterm expr (19.0-4: ) Entering state 8 Reading a token Next token is token ';' (19.5: ) Shifting token ';' (19.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (19.0-4: ) $2 = token ';' (19.5: ) -> $$ = nterm stmt (19.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-17.15: ) $2 = nterm stmt (19.0-5: ) -> $$ = nterm prog (1.1-19.5: ) Entering state 1 Reading a token Next token is token '@' (21.0: ) Shifting token '@' (21.0: ) Entering state 6 Reducing stack 0 by rule 6 (line 87): $1 = token '@' (21.0: ) Cleanup: popping nterm prog (1.1-19.5: )
./cxx-type.at:441: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse Entering state 0 Reducing stack 0 by rule 1 (line 71): -> $$ = nterm prog (1.1: ) Entering state 1 Reading a token Next token is token ID (3.0: ) Shifting token ID (3.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (3.0: ) -> $$ = nterm expr (3.0: ) Entering state 8 Reading a token Next token is token '+' (3.2: ) Shifting token '+' (3.2: )
Entering state 15 Reading a token Next token is token ID (3.4: ) Shifting token ID (3.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (3.4: ) -> $$ = nterm expr (3.4: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (3.0: ) $2 = token '+' (3.2: ) $3 = nterm expr (3.4: ) -> $$ = nterm expr (3.0-4: ) Entering state 8 Reading a token Next token is token ';' (3.5: ) Shifting token ';' (3.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (3.0-4: ) $2 = token ';' (3.5: ) -> $$ = nterm stmt (3.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1: ) $2 = nterm stmt (3.0-5: ) -> $$ = nterm prog (1.1-3.5: ) Entering state 1 Reading a token Next token is token TYPENAME (5.0: ) Shifting token TYPENAME (5.0: ) Entering state 4 Reading a token Next token is token ID (5.2: ) Shifting token ID (5.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (5.2: ) -> $$ = nterm declarator (5.2: ) Entering state 13 Reading a token Next token is token ';' (5.3: ) Shifting token ';' (5.3: ) Entering state 23 Reducing stack 0 by rule 11 (line 97): $1 = token TYPENAME (5.0: ) $2 = nterm declarator (5.2: ) $3 = token ';' (5.3: ) -> $$ = nterm decl (5.0-3: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (5.0-3: ) -> $$ = nterm stmt (5.0-3: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-3.5: ) $2 = nterm stmt (5.0-3: ) -> $$ = nterm prog (1.1-5.3: ) Entering state 1 Reading a token Next token is token TYPENAME (7.0: ) Shifting token TYPENAME (7.0: ) Entering state 4 Reading a token Next token is token ID (7.2: ) Shifting token ID (7.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (7.2: ) -> $$ = nterm declarator (7.2: ) Entering state 13 Reading a token Next token is token '=' (7.4: ) Shifting token '=' (7.4: ) Entering state 22 Reading a token Next token is token ID (7.6: ) Shifting 
token ID (7.6: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (7.6: ) -> $$ = nterm expr (7.6: ) Entering state 29 Reading a token Next token is token ';' (7.7: ) Shifting token ';' (7.7: ) Entering state 30 Reducing stack 0 by rule 12 (line 99): $1 = token TYPENAME (7.0: ) $2 = nterm declarator (7.2: ) $3 = token '=' (7.4: ) $4 = nterm expr (7.6: ) $5 = token ';' (7.7: ) -> $$ = nterm decl (7.0-7: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (7.0-7: ) -> $$ = nterm stmt (7.0-7: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-5.3: ) $2 = nterm stmt (7.0-7: ) -> $$ = nterm prog (1.1-7.7: ) Entering state 1 Reading a token Next token is token ID (9.0: ) Shifting token ID (9.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.0: ) -> $$ = nterm expr (9.0: ) Entering state 8 Reading a token Next token is token '=' (9.2: ) Shifting token '=' (9.2: ) Entering state 14 Reading a token Next token is token ID (9.4: ) Shifting token ID (9.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.4: ) -> $$ = nterm expr (9.4: ) Entering state 24 Reading a token Next token is token ';' (9.5: ) Reducing stack 0 by rule 10 (line 94): $1 = nterm expr (9.0: ) $2 = token '=' (9.2: ) $3 = nterm expr (9.4: ) -> $$ = nterm expr (9.0-4: ) Entering state 8 Next token is token ';' (9.5: ) Shifting token ';' (9.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (9.0-4: ) $2 = token ';' (9.5: ) -> $$ = nterm stmt (9.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-7.7: ) $2 = nterm stmt (9.0-5: ) -> $$ = nterm prog (1.1-9.5: ) Entering state 1 Reading a token Next token is token TYPENAME (11.0: ) Shifting token TYPENAME (11.0: ) Entering state 4 Reading a token Next token is token '(' (11.2: ) Shifting token '(' (11.2: ) Entering state 12 Reading a token Next token is token ID (11.3: ) Shifting token ID 
(11.3: ) Entering state 18 Reading a token Next token is token ')' (11.4: ) Stack 0 Entering state 18 Next token is token ')' (11.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (11.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (11.4: ) Stack 1 Entering state 21 Next token is token ')' (11.4: ) On stack 0, shifting token ')' (11.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (11.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' (11.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' (11.6: ) Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' (11.6: ) Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 90): $1 = token ID (11.3: ) -> $$ = nterm expr (11.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (11.0: ) $2 = token '(' (11.2: ) $3 = nterm expr (11.3: ) $4 = token ')' (11.4: ) -> $$ = nterm expr (11.0-4: ) Returning to deterministic operation. 
Entering state 15 Reading a token Next token is token ID (11.8: ) Shifting token ID (11.8: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (11.8: ) -> $$ = nterm expr (11.8: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (11.0-4: ) $2 = token '+' (11.6: ) $3 = nterm expr (11.8: ) -> $$ = nterm expr (11.0-8: ) Entering state 8 Reading a token Next token is token ';' (11.9: ) Shifting token ';' (11.9: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (11.0-8: ) $2 = token ';' (11.9: ) -> $$ = nterm stmt (11.0-9: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-9.5: ) $2 = nterm stmt (11.0-9: ) -> $$ = nterm prog (1.1-11.9: ) Entering state 1 Reading a token Next token is token TYPENAME (13.0: ) Shifting token TYPENAME (13.0: ) Entering state 4 Reading a token Next token is token '(' (13.2: ) Shifting token '(' (13.2: ) Entering state 12 Reading a token Next token is token ID (13.3: ) Shifting token ID (13.3: ) Entering state 18 Reading a token Next token is token ')' (13.4: ) Stack 0 Entering state 18 Next token is token ')' (13.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (13.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (13.4: ) Stack 1 Entering state 21 Next token is token ')' (13.4: ) On stack 0, shifting token ')' (13.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (13.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' (13.5: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. 
Stack 1 Entering state 13 Next token is token ';' (13.5: ) On stack 0, shifting token ';' (13.5: ) Stack 0 now in state 16 On stack 1, shifting token ';' (13.5: ) Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (15.0: ) Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 97); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME (15.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (13.3: ) -> $$ = nterm declarator (13.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (13.2: ) $2 = nterm declarator (13.3: ) $3 = token ')' (13.4: ) -> $$ = nterm declarator (13.2-4: ) Reducing stack -1 by rule 11 (line 97): $1 = token TYPENAME (13.0: ) $2 = nterm declarator (13.2-4: ) $3 = token ';' (13.5: ) -> $$ = nterm decl (13.0-5: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (13.0-5: ) -> $$ = nterm stmt (13.0-5: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (13.3: ) -> $$ = nterm expr (13.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (13.0: ) $2 = token '(' (13.2: ) $3 = nterm expr (13.3: ) $4 = token ')' (13.4: ) -> $$ = nterm expr (13.0-4: ) Reducing stack -1 by rule 3 (line 84): $1 = nterm expr (13.0-4: ) $2 = token ';' (13.5: ) -> $$ = nterm stmt (13.0-5: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-11.9: ) $2 = nterm stmt (13.0-5: ) -> $$ = nterm prog (1.1-13.5: ) Returning to deterministic operation. 
Entering state 4 Reading a token Next token is token '(' (15.2: ) Shifting token '(' (15.2: ) Entering state 12 Reading a token Next token is token ID (15.3: ) Shifting token ID (15.3: ) Entering state 18 Reading a token Next token is token ')' (15.4: ) Stack 0 Entering state 18 Next token is token ')' (15.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (15.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (15.4: ) Stack 1 Entering state 21 Next token is token ')' (15.4: ) On stack 0, shifting token ')' (15.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (15.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' (15.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '=' (15.6: ) On stack 0, shifting token '=' (15.6: ) Stack 0 now in state 14 On stack 1, shifting token '=' (15.6: ) Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID (15.8: ) Stack 1 Entering state 22 Next token is token ID (15.8: ) On stack 0, shifting token ID (15.8: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.8: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' (15.10: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 29. 
Stack 1 Entering state 29 Next token is token '+' (15.10: ) On stack 0, shifting token '+' (15.10: ) Stack 0 now in state 15 On stack 1, shifting token '+' (15.10: ) Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID (15.12: ) Stack 1 Entering state 15 Next token is token ID (15.12: ) On stack 0, shifting token ID (15.12: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.12: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 93); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' (15.13: ) Reduced stack 0 by rule 10 (line 94); action deferred. Now in state 8. Stack 0 Entering state 8 Next token is token ';' (15.13: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 93); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token ';' (15.13: ) On stack 0, shifting token ';' (15.13: ) Stack 0 now in state 16 On stack 1, shifting token ';' (15.13: ) Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (17.0: ) Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 99); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
On stack 0, shifting token TYPENAME (17.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (15.3: ) -> $$ = nterm declarator (15.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (15.2: ) $2 = nterm declarator (15.3: ) $3 = token ')' (15.4: ) -> $$ = nterm declarator (15.2-4: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.8: ) -> $$ = nterm expr (15.8: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.12: ) -> $$ = nterm expr (15.12: ) Reducing stack -1 by rule 9 (line 93): $1 = nterm expr (15.8: ) $2 = token '+' (15.10: ) $3 = nterm expr (15.12: ) -> $$ = nterm expr (15.8-12: ) Reducing stack -1 by rule 12 (line 99): $1 = token TYPENAME (15.0: ) $2 = nterm declarator (15.2-4: ) $3 = token '=' (15.6: ) $4 = nterm expr (15.8-12: ) $5 = token ';' (15.13: ) -> $$ = nterm decl (15.0-13: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (15.0-13: ) -> $$ = nterm stmt (15.0-13: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.3: ) -> $$ = nterm expr (15.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (15.0: ) $2 = token '(' (15.2: ) $3 = nterm expr (15.3: ) $4 = token ')' (15.4: ) -> $$ = nterm expr (15.0-4: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.8: ) -> $$ = nterm expr (15.8: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.12: ) -> $$ = nterm expr (15.12: ) Reducing stack -1 by rule 9 (line 93): $1 = nterm expr (15.8: ) $2 = token '+' (15.10: ) $3 = nterm expr (15.12: ) -> $$ = nterm expr (15.8-12: ) Reducing stack -1 by rule 10 (line 94): $1 = nterm expr (15.0-4: ) $2 = token '=' (15.6: ) $3 = nterm expr (15.8-12: ) -> $$ = nterm expr (15.0-12: ) Reducing stack -1 by rule 3 (line 84): $1 = nterm expr (15.0-12: ) $2 = token ';' (15.13: ) -> $$ = nterm stmt (15.0-13: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-13.5: ) $2 = nterm stmt (15.0-13: ) -> $$ = nterm prog (1.1-15.13: ) Returning to deterministic operation. 
Entering state 4 Reading a token Next token is token '(' (17.2: ) Shifting token '(' (17.2: ) Entering state 12 Reading a token Next token is token ID (17.3: ) Shifting token ID (17.3: ) Entering state 18 Reading a token Next token is token ID (17.5: ) Reducing stack 0 by rule 7 (line 90): $1 = token ID (17.3: ) -> $$ = nterm expr (17.3: ) Entering state 20 Next token is token ID (17.5: ) 17.5: syntax error Error: popping nterm expr (17.3: ) Error: popping token '(' (17.2: ) Error: popping token TYPENAME (17.0: ) Shifting token error (17.0-5: ) Entering state 3 Next token is token ID (17.5: ) Error: discarding token ID (17.5: ) Reading a token Next token is token ')' (17.6: ) Error: discarding token ')' (17.6: ) Reading a token Next token is token '=' (17.8: ) Error: discarding token '=' (17.8: ) Reading a token Next token is token ID (17.10: ) Error: discarding token ID (17.10: ) Reading a token Next token is token '+' (17.12: ) Error: discarding token '+' (17.12: ) Reading a token Next token is token ID (17.14: ) Error: discarding token ID (17.14: ) Reading a token Next token is token ';' (17.15: ) Entering state 3 Next token is token ';' (17.15: ) Shifting token ';' (17.15: ) Entering state 10 Reducing stack 0 by rule 5 (line 86): $1 = token error (17.0-14: ) $2 = token ';' (17.15: ) -> $$ = nterm stmt (17.0-15: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-15.13: ) $2 = nterm stmt (17.0-15: ) -> $$ = nterm prog (1.1-17.15: ) Entering state 1 Reading a token Next token is token ID (19.0: ) Shifting token ID (19.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.0: ) -> $$ = nterm expr (19.0: ) Entering state 8 Reading a token Next token is token '+' (19.2: ) Shifting token '+' (19.2: ) Entering state 15 Reading a token Next token is token ID (19.4: ) Shifting token ID (19.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.4: ) -> $$ = nterm expr (19.4: ) Entering state 25 
Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (19.0: ) $2 = token '+' (19.2: ) $3 = nterm expr (19.4: ) -> $$ = nterm expr (19.0-4: ) Entering state 8 Reading a token Next token is token ';' (19.5: ) Shifting token ';' (19.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (19.0-4: ) $2 = token ';' (19.5: ) -> $$ = nterm stmt (19.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-17.15: ) $2 = nterm stmt (19.0-5: ) -> $$ = nterm prog (1.1-19.5: ) Entering state 1 Reading a token Next token is token '@' (21.0: ) Shifting token '@' (21.0: ) Entering state 6 Reducing stack 0 by rule 6 (line 87): $1 = token '@' (21.0: ) Cleanup: popping nterm prog (1.1-19.5: )
710. cxx-type.at:438: ok
712. cxx-type.at:449: testing GLR: Merge conflicting parses, pure, locations ...
./cxx-type.at:450: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o types.c types.y
stderr:
types.y:87.8-37: warning: unset value: $$ [-Wother]
types.y: warning: 1 reduce/reduce conflict [-Wconflicts-rr]
types.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples
./cxx-type.at:450: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o types types.c $LIBS
stderr:
stdout:
./c++.at:1555: $PREPARSER ./test
stderr:
./c++.at:1555: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
======== Testing with C++ standard flags: ''
./c++.at:1555: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o check check.cc $LIBS
stderr:
stdout:
./c++.at:1555: ./check
./c++.at:1555: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -fcaret -o test.cc test.y
./c++.at:1555: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o test test.cc $LIBS
stderr:
stdout:
./cxx-type.at:447: $PREPARSER ./types test-input
stderr:
syntax error
./cxx-type.at:447: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./cxx-type.at:447: $PREPARSER ./types -p
test-input stderr: Starting parse Entering state 0 Reducing stack 0 by rule 1 (line 64): -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token ';' () Shifting token ';' () Entering state 23 Reducing stack 0 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 
Reading a token Next token is token '=' () Shifting token '=' () Entering state 22 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 29 Reading a token Next token is token ';' () Shifting token ';' () Entering state 30 Reducing stack 0 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 14 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 24 Reading a token Next token is token ';' () Reducing stack 0 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting 
off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' () Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' () Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Returning to deterministic operation. 
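The trace above shows the GLR parser disambiguating the C++-style statement by splitting into two stacks: stack 1 pursues the declarator reading of `T (x)`, stack 0 the expression (cast) reading. When the next token is `'+'`, only the expression reading can continue, so "Stack 1 dies" and the parser returns to deterministic operation. A minimal sketch of that pruning step (hypothetical names, not bison's generated code):

```python
# Hypothetical sketch of GLR stack pruning; not bison's implementation.
# Two readings of "T (x)" coexist until the lookahead decides.
def prune_stacks(stacks, lookahead):
    """Keep only stacks whose current state has an action on lookahead."""
    return [s for s in stacks if lookahead in s["actions"]]

# Stack 0: expression reading "T(x)" (a cast); '+' can follow an expr.
# Stack 1: declarator reading "T (x)"; only '=' or ';' may follow.
stacks = [
    {"name": "expr",       "actions": {"+", "=", ";"}},
    {"name": "declarator", "actions": {"=", ";"}},
]
alive = prune_stacks(stacks, "+")
print([s["name"] for s in alive])  # only the expression stack survives
```

With `';'` as lookahead instead, both stacks would survive and the parser would have to merge them, as the later part of the trace shows.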
Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. 
Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 87); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Reducing stack -1 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. 
Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '=' () On stack 0, shifting token '=' () Stack 0 now in state 14 On stack 1, shifting token '=' () Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID () Stack 1 Entering state 22 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' () On stack 0, shifting token '+' () Stack 0 now in state 15 On stack 1, shifting token '+' () Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID () Stack 1 Entering state 15 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 83); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' () Reduced stack 0 by rule 10 (line 84); action deferred. Now in state 8. 
Stack 0 Entering state 8 Next token is token ';' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 83); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 89); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. 
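While stacks are split, the trace reports each reduction as "action deferred": semantic actions are not run immediately, because the reduction may belong to a stack that later dies. Only after the merge ("Merging stack 1 into stack 0") are the surviving derivation's actions replayed in order, which is why a burst of "Reducing stack -1" lines appears just before "Returning to deterministic operation." A hedged sketch of that defer-then-replay bookkeeping (hypothetical helper names, not bison's code):

```python
# Hypothetical sketch of deferred semantic actions in a GLR parser;
# not bison's generated code.
deferred = []

def defer_reduction(rule, action):
    """Record a reduction instead of running its action immediately
    (the trace's 'action deferred')."""
    deferred.append((rule, action))

def replay(value):
    """Run all deferred actions in order once the ambiguity is
    resolved (the trace's 'Reducing stack -1' burst)."""
    for rule, action in deferred:
        value = action(value)
    return value

# Defer two reductions while the parse is still ambiguous.
defer_reduction(7, lambda v: v + ["expr"])       # rule 7: expr: ID
defer_reduction(9, lambda v: v + ["expr+expr"])  # rule 9: expr '+' expr
result = replay([])
```

Deferring is what makes user actions safe under GLR: an action runs at most once, and only for the parse that actually wins.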
Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ID () Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 20 Next token is token ID () syntax error Error: popping nterm expr () Error: popping token '(' () Error: popping token TYPENAME () Shifting token error () Entering state 3 Next token is token ID () Error: discarding token ID () Reading a token Next token is token ')' () Error: discarding token ')' () Reading a token Next token is token '=' () Error: discarding token '=' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token ';' () Entering state 3 Next token is token ';' () Shifting token ';' () Entering state 10 Reducing stack 0 by rule 5 (line 76): $1 = token error () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = 
nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token '@' () Shifting token '@' () Entering state 6 Reducing stack 0 by rule 6 (line 77): $1 = token '@' () Cleanup: popping nterm prog () ./cxx-type.at:447: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reducing stack 0 by rule 1 (line 64): -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token ';' () Shifting token ';' () Entering state 23 Reducing stack 0 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 
Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token '=' () Shifting token '=' () Entering state 22 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 29 Reading a token Next token is token ';' () Shifting token ';' () Entering state 30 Reducing stack 0 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 14 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 24 Reading a token Next token is token ';' () Reducing stack 0 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = 
nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' () Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' () Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Returning to deterministic operation. 
Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. 
Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 87); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Reducing stack -1 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. 
Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '=' () On stack 0, shifting token '=' () Stack 0 now in state 14 On stack 1, shifting token '=' () Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID () Stack 1 Entering state 22 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' () On stack 0, shifting token '+' () Stack 0 now in state 15 On stack 1, shifting token '+' () Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID () Stack 1 Entering state 15 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 83); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' () Reduced stack 0 by rule 10 (line 84); action deferred. Now in state 8. 
Stack 0 Entering state 8 Next token is token ';' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 83); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 89); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. 
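The error-recovery passages of these traces ("Error: popping ...", then a run of "Error: discarding token ...") show yacc-style panic-mode recovery: after the syntax error the parser pops states until it can shift the special `error` token, then discards input until it reaches a token that lets the error rule (`stmt: error ';'`, rule 5 in these grammars) complete. A minimal sketch of the discard loop (hypothetical function, not bison's generated code):

```python
# Hypothetical sketch of panic-mode error recovery; not bison's code.
def recover(tokens, sync=";"):
    """Discard tokens after a syntax error until the synchronizing
    token is reached, mirroring 'Error: discarding token' lines."""
    discarded = []
    while tokens and tokens[0] != sync:
        discarded.append(tokens.pop(0))
    return discarded, tokens  # sync token is left to be shifted

# After the error at the second ID in 'T (x) = y + z;', everything
# up to ';' is discarded, then ';' completes 'stmt: error ';''.
discarded, rest = recover(["ID", ")", "=", "ID", "+", "ID", ";"])
```

The six discarded tokens here correspond one-for-one to the six "Error: discarding token" lines in the trace, after which `';'` is shifted and rule 5 reduces.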
Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ID () Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 20 Next token is token ID () syntax error Error: popping nterm expr () Error: popping token '(' () Error: popping token TYPENAME () Shifting token error () Entering state 3 Next token is token ID () Error: discarding token ID () Reading a token Next token is token ')' () Error: discarding token ')' () Reading a token Next token is token '=' () Error: discarding token '=' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token '+' () Error: discarding token '+' () Reading a token Next token is token ID () Error: discarding token ID () Reading a token Next token is token ';' () Entering state 3 Next token is token ';' () Shifting token ';' () Entering state 10 Reducing stack 0 by rule 5 (line 76): $1 = token error () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '+' () Shifting token '+' () Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = 
nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token '@' () Shifting token '@' () Entering state 6 Reducing stack 0 by rule 6 (line 77): $1 = token '@' () Cleanup: popping nterm prog () 711. cxx-type.at:444: ok 713. cxx-type.at:455: testing GLR: Verbose messages, resolve ambiguity, impure, no locations ... ./cxx-type.at:456: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o types.c types.y stderr: types.y:77.8-37: warning: unset value: $$ [-Wother] types.y: warning: 1 reduce/reduce conflict [-Wconflicts-rr] types.y: note: rerun with option '-Wcounterexamples' to generate conflict counterexamples ./cxx-type.at:456: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o types types.c $LIBS stderr: stdout: ./cxx-type.at:452: $PREPARSER ./types test-input stderr: 17.5: syntax error ./cxx-type.at:452: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr ./cxx-type.at:452: $PREPARSER ./types -p test-input stderr: Starting parse Entering state 0 Reducing stack 0 by rule 1 (line 71): -> $$ = nterm prog (1.1: ) Entering state 1 Reading a token Next token is token ID (3.0: ) Shifting token ID (3.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (3.0: ) -> $$ = nterm expr (3.0: ) Entering state 8 Reading a token Next token is token '+' (3.2: ) Shifting token '+' (3.2: ) Entering state 15 Reading a token Next token is token ID (3.4: ) Shifting token ID (3.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (3.4: ) -> $$ = nterm expr (3.4: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (3.0: ) $2 = token '+' (3.2: ) $3 = nterm expr (3.4: ) -> $$ = nterm expr (3.0-4: ) Entering state 8 Reading a token Next token is token ';' (3.5: ) Shifting token ';' (3.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr 
(3.0-4: ) $2 = token ';' (3.5: ) -> $$ = nterm stmt (3.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1: ) $2 = nterm stmt (3.0-5: ) -> $$ = nterm prog (1.1-3.5: ) Entering state 1 Reading a token Next token is token TYPENAME (5.0: ) Shifting token TYPENAME (5.0: ) Entering state 4 Reading a token Next token is token ID (5.2: ) Shifting token ID (5.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (5.2: ) -> $$ = nterm declarator (5.2: ) Entering state 13 Reading a token Next token is token ';' (5.3: ) Shifting token ';' (5.3: ) Entering state 23 Reducing stack 0 by rule 11 (line 97): $1 = token TYPENAME (5.0: ) $2 = nterm declarator (5.2: ) $3 = token ';' (5.3: ) -> $$ = nterm decl (5.0-3: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (5.0-3: ) -> $$ = nterm stmt (5.0-3: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-3.5: ) $2 = nterm stmt (5.0-3: ) -> $$ = nterm prog (1.1-5.3: ) Entering state 1 Reading a token Next token is token TYPENAME (7.0: ) Shifting token TYPENAME (7.0: ) Entering state 4 Reading a token Next token is token ID (7.2: ) Shifting token ID (7.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (7.2: ) -> $$ = nterm declarator (7.2: ) Entering state 13 Reading a token Next token is token '=' (7.4: ) Shifting token '=' (7.4: ) Entering state 22 Reading a token Next token is token ID (7.6: ) Shifting token ID (7.6: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (7.6: ) -> $$ = nterm expr (7.6: ) Entering state 29 Reading a token Next token is token ';' (7.7: ) Shifting token ';' (7.7: ) Entering state 30 Reducing stack 0 by rule 12 (line 99): $1 = token TYPENAME (7.0: ) $2 = nterm declarator (7.2: ) $3 = token '=' (7.4: ) $4 = nterm expr (7.6: ) $5 = token ';' (7.7: ) -> $$ = nterm decl (7.0-7: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (7.0-7: ) -> $$ 
= nterm stmt (7.0-7: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-5.3: ) $2 = nterm stmt (7.0-7: ) -> $$ = nterm prog (1.1-7.7: ) Entering state 1 Reading a token Next token is token ID (9.0: ) Shifting token ID (9.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.0: ) -> $$ = nterm expr (9.0: ) Entering state 8 Reading a token Next token is token '=' (9.2: ) Shifting token '=' (9.2: ) Entering state 14 Reading a token Next token is token ID (9.4: ) Shifting token ID (9.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.4: ) -> $$ = nterm expr (9.4: ) Entering state 24 Reading a token Next token is token ';' (9.5: ) Reducing stack 0 by rule 10 (line 94): $1 = nterm expr (9.0: ) $2 = token '=' (9.2: ) $3 = nterm expr (9.4: ) -> $$ = nterm expr (9.0-4: ) Entering state 8 Next token is token ';' (9.5: ) Shifting token ';' (9.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (9.0-4: ) $2 = token ';' (9.5: ) -> $$ = nterm stmt (9.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-7.7: ) $2 = nterm stmt (9.0-5: ) -> $$ = nterm prog (1.1-9.5: ) Entering state 1 Reading a token Next token is token TYPENAME (11.0: ) Shifting token TYPENAME (11.0: ) Entering state 4 Reading a token Next token is token '(' (11.2: ) Shifting token '(' (11.2: ) Entering state 12 Reading a token Next token is token ID (11.3: ) Shifting token ID (11.3: ) Entering state 18 Reading a token Next token is token ')' (11.4: ) Stack 0 Entering state 18 Next token is token ')' (11.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (11.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. 
Stack 0 Entering state 20 Next token is token ')' (11.4: ) Stack 1 Entering state 21 Next token is token ')' (11.4: ) On stack 0, shifting token ')' (11.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (11.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' (11.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' (11.6: ) Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' (11.6: ) Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 90): $1 = token ID (11.3: ) -> $$ = nterm expr (11.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (11.0: ) $2 = token '(' (11.2: ) $3 = nterm expr (11.3: ) $4 = token ')' (11.4: ) -> $$ = nterm expr (11.0-4: ) Returning to deterministic operation. Entering state 15 Reading a token Next token is token ID (11.8: ) Shifting token ID (11.8: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (11.8: ) -> $$ = nterm expr (11.8: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (11.0-4: ) $2 = token '+' (11.6: ) $3 = nterm expr (11.8: ) -> $$ = nterm expr (11.0-8: ) Entering state 8 Reading a token Next token is token ';' (11.9: ) Shifting token ';' (11.9: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (11.0-8: ) $2 = token ';' (11.9: ) -> $$ = nterm stmt (11.0-9: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-9.5: ) $2 = nterm stmt (11.0-9: ) -> $$ = nterm prog (1.1-11.9: ) Entering state 1 Reading a token Next token is token TYPENAME (13.0: ) Shifting token TYPENAME (13.0: ) Entering state 4 Reading a token Next token is token '(' (13.2: ) Shifting token '(' (13.2: ) Entering state 12 Reading a token Next token is token ID (13.3: ) Shifting token ID (13.3: ) 
Entering state 18 Reading a token Next token is token ')' (13.4: ) Stack 0 Entering state 18 Next token is token ')' (13.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (13.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (13.4: ) Stack 1 Entering state 21 Next token is token ')' (13.4: ) On stack 0, shifting token ')' (13.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (13.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' (13.5: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' (13.5: ) On stack 0, shifting token ';' (13.5: ) Stack 0 now in state 16 On stack 1, shifting token ';' (13.5: ) Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (15.0: ) Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 97); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
On stack 0, shifting token TYPENAME (15.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (13.3: ) -> $$ = nterm declarator (13.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (13.2: ) $2 = nterm declarator (13.3: ) $3 = token ')' (13.4: ) -> $$ = nterm declarator (13.2-4: ) Reducing stack -1 by rule 11 (line 97): $1 = token TYPENAME (13.0: ) $2 = nterm declarator (13.2-4: ) $3 = token ';' (13.5: ) -> $$ = nterm decl (13.0-5: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (13.0-5: ) -> $$ = nterm stmt (13.0-5: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (13.3: ) -> $$ = nterm expr (13.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (13.0: ) $2 = token '(' (13.2: ) $3 = nterm expr (13.3: ) $4 = token ')' (13.4: ) -> $$ = nterm expr (13.0-4: ) Reducing stack -1 by rule 3 (line 84): $1 = nterm expr (13.0-4: ) $2 = token ';' (13.5: ) -> $$ = nterm stmt (13.0-5: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-11.9: ) $2 = nterm stmt (13.0-5: ) -> $$ = nterm prog (1.1-13.5: ) Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' (15.2: ) Shifting token '(' (15.2: ) Entering state 12 Reading a token Next token is token ID (15.3: ) Shifting token ID (15.3: ) Entering state 18 Reading a token Next token is token ')' (15.4: ) Stack 0 Entering state 18 Next token is token ')' (15.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (15.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (15.4: ) Stack 1 Entering state 21 Next token is token ')' (15.4: ) On stack 0, shifting token ')' (15.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (15.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. 
Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' (15.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '=' (15.6: ) On stack 0, shifting token '=' (15.6: ) Stack 0 now in state 14 On stack 1, shifting token '=' (15.6: ) Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID (15.8: ) Stack 1 Entering state 22 Next token is token ID (15.8: ) On stack 0, shifting token ID (15.8: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.8: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' (15.10: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' (15.10: ) On stack 0, shifting token '+' (15.10: ) Stack 0 now in state 15 On stack 1, shifting token '+' (15.10: ) Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID (15.12: ) Stack 1 Entering state 15 Next token is token ID (15.12: ) On stack 0, shifting token ID (15.12: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.12: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 93); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' (15.13: ) Reduced stack 0 by rule 10 (line 94); action deferred. Now in state 8. Stack 0 Entering state 8 Next token is token ';' (15.13: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 93); action deferred. Now in state 29. 
Stack 1 Entering state 29 Next token is token ';' (15.13: ) On stack 0, shifting token ';' (15.13: ) Stack 0 now in state 16 On stack 1, shifting token ';' (15.13: ) Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (17.0: ) Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 99); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME (17.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (15.3: ) -> $$ = nterm declarator (15.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (15.2: ) $2 = nterm declarator (15.3: ) $3 = token ')' (15.4: ) -> $$ = nterm declarator (15.2-4: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.8: ) -> $$ = nterm expr (15.8: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.12: ) -> $$ = nterm expr (15.12: ) Reducing stack -1 by rule 9 (line 93): $1 = nterm expr (15.8: ) $2 = token '+' (15.10: ) $3 = nterm expr (15.12: ) -> $$ = nterm expr (15.8-12: ) Reducing stack -1 by rule 12 (line 99): $1 = token TYPENAME (15.0: ) $2 = nterm declarator (15.2-4: ) $3 = token '=' (15.6: ) $4 = nterm expr (15.8-12: ) $5 = token ';' (15.13: ) -> $$ = nterm decl (15.0-13: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (15.0-13: ) -> $$ = nterm stmt (15.0-13: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.3: ) -> $$ = nterm expr (15.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (15.0: ) $2 = token '(' (15.2: ) $3 = nterm expr (15.3: ) $4 = token ')' (15.4: ) -> $$ = 
nterm expr (15.0-4: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.8: ) -> $$ = nterm expr (15.8: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.12: ) -> $$ = nterm expr (15.12: ) Reducing stack -1 by rule 9 (line 93): $1 = nterm expr (15.8: ) $2 = token '+' (15.10: ) $3 = nterm expr (15.12: ) -> $$ = nterm expr (15.8-12: ) Reducing stack -1 by rule 10 (line 94): $1 = nterm expr (15.0-4: ) $2 = token '=' (15.6: ) $3 = nterm expr (15.8-12: ) -> $$ = nterm expr (15.0-12: ) Reducing stack -1 by rule 3 (line 84): $1 = nterm expr (15.0-12: ) $2 = token ';' (15.13: ) -> $$ = nterm stmt (15.0-13: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-13.5: ) $2 = nterm stmt (15.0-13: ) -> $$ = nterm prog (1.1-15.13: ) Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' (17.2: ) Shifting token '(' (17.2: ) Entering state 12 Reading a token Next token is token ID (17.3: ) Shifting token ID (17.3: ) Entering state 18 Reading a token Next token is token ID (17.5: ) Reducing stack 0 by rule 7 (line 90): $1 = token ID (17.3: ) -> $$ = nterm expr (17.3: ) Entering state 20 Next token is token ID (17.5: ) 17.5: syntax error Error: popping nterm expr (17.3: ) Error: popping token '(' (17.2: ) Error: popping token TYPENAME (17.0: ) Shifting token error (17.0-5: ) Entering state 3 Next token is token ID (17.5: ) Error: discarding token ID (17.5: ) Reading a token Next token is token ')' (17.6: ) Error: discarding token ')' (17.6: ) Reading a token Next token is token '=' (17.8: ) Error: discarding token '=' (17.8: ) Reading a token Next token is token ID (17.10: ) Error: discarding token ID (17.10: ) Reading a token Next token is token '+' (17.12: ) Error: discarding token '+' (17.12: ) Reading a token Next token is token ID (17.14: ) Error: discarding token ID (17.14: ) Reading a token Next token is token ';' (17.15: ) Entering state 3 Next token is token ';' (17.15: ) Shifting token ';' (17.15: ) Entering 
state 10 Reducing stack 0 by rule 5 (line 86): $1 = token error (17.0-14: ) $2 = token ';' (17.15: ) -> $$ = nterm stmt (17.0-15: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-15.13: ) $2 = nterm stmt (17.0-15: ) -> $$ = nterm prog (1.1-17.15: ) Entering state 1 Reading a token Next token is token ID (19.0: ) Shifting token ID (19.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.0: ) -> $$ = nterm expr (19.0: ) Entering state 8 Reading a token Next token is token '+' (19.2: ) Shifting token '+' (19.2: ) Entering state 15 Reading a token Next token is token ID (19.4: ) Shifting token ID (19.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.4: ) -> $$ = nterm expr (19.4: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (19.0: ) $2 = token '+' (19.2: ) $3 = nterm expr (19.4: ) -> $$ = nterm expr (19.0-4: ) Entering state 8 Reading a token Next token is token ';' (19.5: ) Shifting token ';' (19.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (19.0-4: ) $2 = token ';' (19.5: ) -> $$ = nterm stmt (19.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-17.15: ) $2 = nterm stmt (19.0-5: ) -> $$ = nterm prog (1.1-19.5: ) Entering state 1 Reading a token Next token is token '@' (21.0: ) Shifting token '@' (21.0: ) Entering state 6 Reducing stack 0 by rule 6 (line 87): $1 = token '@' (21.0: ) Cleanup: popping nterm prog (1.1-19.5: ) ./cxx-type.at:452: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr stderr: Starting parse Entering state 0 Reducing stack 0 by rule 1 (line 71): -> $$ = nterm prog (1.1: ) Entering state 1 Reading a token Next token is token ID (3.0: ) Shifting token ID (3.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (3.0: ) -> $$ = nterm expr (3.0: ) Entering state 8 Reading a token Next token is token '+' (3.2: ) Shifting token '+' (3.2: ) 
Entering state 15 Reading a token Next token is token ID (3.4: ) Shifting token ID (3.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (3.4: ) -> $$ = nterm expr (3.4: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (3.0: ) $2 = token '+' (3.2: ) $3 = nterm expr (3.4: ) -> $$ = nterm expr (3.0-4: ) Entering state 8 Reading a token Next token is token ';' (3.5: ) Shifting token ';' (3.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (3.0-4: ) $2 = token ';' (3.5: ) -> $$ = nterm stmt (3.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1: ) $2 = nterm stmt (3.0-5: ) -> $$ = nterm prog (1.1-3.5: ) Entering state 1 Reading a token Next token is token TYPENAME (5.0: ) Shifting token TYPENAME (5.0: ) Entering state 4 Reading a token Next token is token ID (5.2: ) Shifting token ID (5.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (5.2: ) -> $$ = nterm declarator (5.2: ) Entering state 13 Reading a token Next token is token ';' (5.3: ) Shifting token ';' (5.3: ) Entering state 23 Reducing stack 0 by rule 11 (line 97): $1 = token TYPENAME (5.0: ) $2 = nterm declarator (5.2: ) $3 = token ';' (5.3: ) -> $$ = nterm decl (5.0-3: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (5.0-3: ) -> $$ = nterm stmt (5.0-3: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-3.5: ) $2 = nterm stmt (5.0-3: ) -> $$ = nterm prog (1.1-5.3: ) Entering state 1 Reading a token Next token is token TYPENAME (7.0: ) Shifting token TYPENAME (7.0: ) Entering state 4 Reading a token Next token is token ID (7.2: ) Shifting token ID (7.2: ) Entering state 11 Reducing stack 0 by rule 13 (line 104): $1 = token ID (7.2: ) -> $$ = nterm declarator (7.2: ) Entering state 13 Reading a token Next token is token '=' (7.4: ) Shifting token '=' (7.4: ) Entering state 22 Reading a token Next token is token ID (7.6: ) Shifting 
token ID (7.6: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (7.6: ) -> $$ = nterm expr (7.6: ) Entering state 29 Reading a token Next token is token ';' (7.7: ) Shifting token ';' (7.7: ) Entering state 30 Reducing stack 0 by rule 12 (line 99): $1 = token TYPENAME (7.0: ) $2 = nterm declarator (7.2: ) $3 = token '=' (7.4: ) $4 = nterm expr (7.6: ) $5 = token ';' (7.7: ) -> $$ = nterm decl (7.0-7: ) Entering state 9 Reducing stack 0 by rule 4 (line 85): $1 = nterm decl (7.0-7: ) -> $$ = nterm stmt (7.0-7: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-5.3: ) $2 = nterm stmt (7.0-7: ) -> $$ = nterm prog (1.1-7.7: ) Entering state 1 Reading a token Next token is token ID (9.0: ) Shifting token ID (9.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.0: ) -> $$ = nterm expr (9.0: ) Entering state 8 Reading a token Next token is token '=' (9.2: ) Shifting token '=' (9.2: ) Entering state 14 Reading a token Next token is token ID (9.4: ) Shifting token ID (9.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (9.4: ) -> $$ = nterm expr (9.4: ) Entering state 24 Reading a token Next token is token ';' (9.5: ) Reducing stack 0 by rule 10 (line 94): $1 = nterm expr (9.0: ) $2 = token '=' (9.2: ) $3 = nterm expr (9.4: ) -> $$ = nterm expr (9.0-4: ) Entering state 8 Next token is token ';' (9.5: ) Shifting token ';' (9.5: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (9.0-4: ) $2 = token ';' (9.5: ) -> $$ = nterm stmt (9.0-5: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-7.7: ) $2 = nterm stmt (9.0-5: ) -> $$ = nterm prog (1.1-9.5: ) Entering state 1 Reading a token Next token is token TYPENAME (11.0: ) Shifting token TYPENAME (11.0: ) Entering state 4 Reading a token Next token is token '(' (11.2: ) Shifting token '(' (11.2: ) Entering state 12 Reading a token Next token is token ID (11.3: ) Shifting token ID 
(11.3: ) Entering state 18 Reading a token Next token is token ')' (11.4: ) Stack 0 Entering state 18 Next token is token ')' (11.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (11.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (11.4: ) Stack 1 Entering state 21 Next token is token ')' (11.4: ) On stack 0, shifting token ')' (11.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (11.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' (11.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' (11.6: ) Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' (11.6: ) Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 90): $1 = token ID (11.3: ) -> $$ = nterm expr (11.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (11.0: ) $2 = token '(' (11.2: ) $3 = nterm expr (11.3: ) $4 = token ')' (11.4: ) -> $$ = nterm expr (11.0-4: ) Returning to deterministic operation. 
Entering state 15 Reading a token Next token is token ID (11.8: ) Shifting token ID (11.8: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (11.8: ) -> $$ = nterm expr (11.8: ) Entering state 25 Reducing stack 0 by rule 9 (line 93): $1 = nterm expr (11.0-4: ) $2 = token '+' (11.6: ) $3 = nterm expr (11.8: ) -> $$ = nterm expr (11.0-8: ) Entering state 8 Reading a token Next token is token ';' (11.9: ) Shifting token ';' (11.9: ) Entering state 16 Reducing stack 0 by rule 3 (line 84): $1 = nterm expr (11.0-8: ) $2 = token ';' (11.9: ) -> $$ = nterm stmt (11.0-9: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-9.5: ) $2 = nterm stmt (11.0-9: ) -> $$ = nterm prog (1.1-11.9: ) Entering state 1 Reading a token Next token is token TYPENAME (13.0: ) Shifting token TYPENAME (13.0: ) Entering state 4 Reading a token Next token is token '(' (13.2: ) Shifting token '(' (13.2: ) Entering state 12 Reading a token Next token is token ID (13.3: ) Shifting token ID (13.3: ) Entering state 18 Reading a token Next token is token ')' (13.4: ) Stack 0 Entering state 18 Next token is token ')' (13.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (13.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (13.4: ) Stack 1 Entering state 21 Next token is token ')' (13.4: ) On stack 0, shifting token ')' (13.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (13.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' (13.5: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. 
Stack 1 Entering state 13 Next token is token ';' (13.5: ) On stack 0, shifting token ';' (13.5: ) Stack 0 now in state 16 On stack 1, shifting token ';' (13.5: ) Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (15.0: ) Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 97); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME (15.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (13.3: ) -> $$ = nterm declarator (13.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (13.2: ) $2 = nterm declarator (13.3: ) $3 = token ')' (13.4: ) -> $$ = nterm declarator (13.2-4: ) Reducing stack -1 by rule 11 (line 97): $1 = token TYPENAME (13.0: ) $2 = nterm declarator (13.2-4: ) $3 = token ';' (13.5: ) -> $$ = nterm decl (13.0-5: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (13.0-5: ) -> $$ = nterm stmt (13.0-5: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (13.3: ) -> $$ = nterm expr (13.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (13.0: ) $2 = token '(' (13.2: ) $3 = nterm expr (13.3: ) $4 = token ')' (13.4: ) -> $$ = nterm expr (13.0-4: ) Reducing stack -1 by rule 3 (line 84): $1 = nterm expr (13.0-4: ) $2 = token ';' (13.5: ) -> $$ = nterm stmt (13.0-5: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-11.9: ) $2 = nterm stmt (13.0-5: ) -> $$ = nterm prog (1.1-13.5: ) Returning to deterministic operation. 
Entering state 4 Reading a token Next token is token '(' (15.2: ) Shifting token '(' (15.2: ) Entering state 12 Reading a token Next token is token ID (15.3: ) Shifting token ID (15.3: ) Entering state 18 Reading a token Next token is token ')' (15.4: ) Stack 0 Entering state 18 Next token is token ')' (15.4: ) Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 104); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' (15.4: ) Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' (15.4: ) Stack 1 Entering state 21 Next token is token ')' (15.4: ) On stack 0, shifting token ')' (15.4: ) Stack 0 now in state 27 On stack 1, shifting token ')' (15.4: ) Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 91); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' (15.6: ) Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 105); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '=' (15.6: ) On stack 0, shifting token '=' (15.6: ) Stack 0 now in state 14 On stack 1, shifting token '=' (15.6: ) Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID (15.8: ) Stack 1 Entering state 22 Next token is token ID (15.8: ) On stack 0, shifting token ID (15.8: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.8: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' (15.10: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 29. 
Stack 1 Entering state 29 Next token is token '+' (15.10: ) On stack 0, shifting token '+' (15.10: ) Stack 0 now in state 15 On stack 1, shifting token '+' (15.10: ) Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID (15.12: ) Stack 1 Entering state 15 Next token is token ID (15.12: ) On stack 0, shifting token ID (15.12: ) Stack 0 now in state 5 On stack 1, shifting token ID (15.12: ) Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 90); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 93); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' (15.13: ) Reduced stack 0 by rule 10 (line 94); action deferred. Now in state 8. Stack 0 Entering state 8 Next token is token ';' (15.13: ) Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 90); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 93); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token ';' (15.13: ) On stack 0, shifting token ';' (15.13: ) Stack 0 now in state 16 On stack 1, shifting token ';' (15.13: ) Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 84); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 72); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME (17.0: ) Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 99); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 85); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 72); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
On stack 0, shifting token TYPENAME (17.0: ) Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 104): $1 = token ID (15.3: ) -> $$ = nterm declarator (15.3: ) Reducing stack -1 by rule 14 (line 105): $1 = token '(' (15.2: ) $2 = nterm declarator (15.3: ) $3 = token ')' (15.4: ) -> $$ = nterm declarator (15.2-4: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.8: ) -> $$ = nterm expr (15.8: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.12: ) -> $$ = nterm expr (15.12: ) Reducing stack -1 by rule 9 (line 93): $1 = nterm expr (15.8: ) $2 = token '+' (15.10: ) $3 = nterm expr (15.12: ) -> $$ = nterm expr (15.8-12: ) Reducing stack -1 by rule 12 (line 99): $1 = token TYPENAME (15.0: ) $2 = nterm declarator (15.2-4: ) $3 = token '=' (15.6: ) $4 = nterm expr (15.8-12: ) $5 = token ';' (15.13: ) -> $$ = nterm decl (15.0-13: ) Reducing stack -1 by rule 4 (line 85): $1 = nterm decl (15.0-13: ) -> $$ = nterm stmt (15.0-13: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.3: ) -> $$ = nterm expr (15.3: ) Reducing stack -1 by rule 8 (line 91): $1 = token TYPENAME (15.0: ) $2 = token '(' (15.2: ) $3 = nterm expr (15.3: ) $4 = token ')' (15.4: ) -> $$ = nterm expr (15.0-4: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.8: ) -> $$ = nterm expr (15.8: ) Reducing stack -1 by rule 7 (line 90): $1 = token ID (15.12: ) -> $$ = nterm expr (15.12: ) Reducing stack -1 by rule 9 (line 93): $1 = nterm expr (15.8: ) $2 = token '+' (15.10: ) $3 = nterm expr (15.12: ) -> $$ = nterm expr (15.8-12: ) Reducing stack -1 by rule 10 (line 94): $1 = nterm expr (15.0-4: ) $2 = token '=' (15.6: ) $3 = nterm expr (15.8-12: ) -> $$ = nterm expr (15.0-12: ) Reducing stack -1 by rule 3 (line 84): $1 = nterm expr (15.0-12: ) $2 = token ';' (15.13: ) -> $$ = nterm stmt (15.0-13: ) Reducing stack -1 by rule 2 (line 72): $1 = nterm prog (1.1-13.5: ) $2 = nterm stmt (15.0-13: ) -> $$ = nterm prog (1.1-15.13: ) Returning to deterministic operation. 
Entering state 4 Reading a token Next token is token '(' (17.2: ) Shifting token '(' (17.2: ) Entering state 12 Reading a token Next token is token ID (17.3: ) Shifting token ID (17.3: ) Entering state 18 Reading a token Next token is token ID (17.5: ) Reducing stack 0 by rule 7 (line 90): $1 = token ID (17.3: ) -> $$ = nterm expr (17.3: ) Entering state 20 Next token is token ID (17.5: ) 17.5: syntax error Error: popping nterm expr (17.3: ) Error: popping token '(' (17.2: ) Error: popping token TYPENAME (17.0: ) Shifting token error (17.0-5: ) Entering state 3 Next token is token ID (17.5: ) Error: discarding token ID (17.5: ) Reading a token Next token is token ')' (17.6: ) Error: discarding token ')' (17.6: ) Reading a token Next token is token '=' (17.8: ) Error: discarding token '=' (17.8: ) Reading a token Next token is token ID (17.10: ) Error: discarding token ID (17.10: ) Reading a token Next token is token '+' (17.12: ) Error: discarding token '+' (17.12: ) Reading a token Next token is token ID (17.14: ) Error: discarding token ID (17.14: ) Reading a token Next token is token ';' (17.15: ) Entering state 3 Next token is token ';' (17.15: ) Shifting token ';' (17.15: ) Entering state 10 Reducing stack 0 by rule 5 (line 86): $1 = token error (17.0-14: ) $2 = token ';' (17.15: ) -> $$ = nterm stmt (17.0-15: ) Entering state 7 Reducing stack 0 by rule 2 (line 72): $1 = nterm prog (1.1-15.13: ) $2 = nterm stmt (17.0-15: ) -> $$ = nterm prog (1.1-17.15: ) Entering state 1 Reading a token Next token is token ID (19.0: ) Shifting token ID (19.0: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.0: ) -> $$ = nterm expr (19.0: ) Entering state 8 Reading a token Next token is token '+' (19.2: ) Shifting token '+' (19.2: ) Entering state 15 Reading a token Next token is token ID (19.4: ) Shifting token ID (19.4: ) Entering state 5 Reducing stack 0 by rule 7 (line 90): $1 = token ID (19.4: ) -> $$ = nterm expr (19.4: ) Entering state 25 
Reducing stack 0 by rule 9 (line 93):
   $1 = nterm expr (19.0: )
   $2 = token '+' (19.2: )
   $3 = nterm expr (19.4: )
-> $$ = nterm expr (19.0-4: )
Entering state 8
Reading a token
Next token is token ';' (19.5: )
Shifting token ';' (19.5: )
Entering state 16
Reducing stack 0 by rule 3 (line 84):
   $1 = nterm expr (19.0-4: )
   $2 = token ';' (19.5: )
-> $$ = nterm stmt (19.0-5: )
Entering state 7
Reducing stack 0 by rule 2 (line 72):
   $1 = nterm prog (1.1-17.15: )
   $2 = nterm stmt (19.0-5: )
-> $$ = nterm prog (1.1-19.5: )
Entering state 1
Reading a token
Next token is token '@' (21.0: )
Shifting token '@' (21.0: )
Entering state 6
Reducing stack 0 by rule 6 (line 87):
   $1 = token '@' (21.0: )
Cleanup: popping nterm prog (1.1-19.5: )
712. cxx-type.at:449: ok
714. glr-regression.at:205: testing Badly Collapsed GLR States: glr.c ...
./glr-regression.at:205: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o glr-regr1.c glr-regr1.y
./glr-regression.at:205: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr1 glr-regr1.c $LIBS
stderr:
stdout:
./glr-regression.at:205: $PREPARSER ./glr-regr1 BPBPB
stderr:
./glr-regression.at:205: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
714. glr-regression.at:205: ok
stderr:
stdout:
./cxx-type.at:458: $PREPARSER ./types test-input
stderr:
syntax error, unexpected ID, expecting '=' or '+' or ')'
./cxx-type.at:458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./cxx-type.at:458: $PREPARSER ./types -p test-input
stderr:
Starting parse
Entering state 0
Reducing stack 0 by rule 1 (line 64):
-> $$ = nterm prog ()
Entering state 1
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 5
Reducing stack 0 by rule 7 (line 80):
   $1 = token ID ()
-> $$ = nterm expr ()
Entering state 8
Reading a token
Next token is token '+' ()
Shifting token '+' ()
Entering state 15
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 5
Reducing stack 0 by rule 7 (line 80):
   $1 = token ID ()
-> $$ = nterm expr ()
Entering state 25
Reducing stack 0 by rule 9 (line 83):
   $1 = nterm expr ()
   $2 = token '+' ()
   $3 = nterm expr ()
-> $$ = nterm expr ()
Entering state 8
Reading a token
Next token is token ';' ()
Shifting token ';' ()
Entering state 16
Reducing stack 0 by rule 3 (line 74):
   $1 = nterm expr ()
   $2 = token ';' ()
-> $$ = nterm stmt ()
Entering state 7
Reducing stack 0 by rule 2 (line 65):
   $1 = nterm prog ()
   $2 = nterm stmt ()
-> $$ = nterm prog ()
Entering state 1
Reading a token
Next token is token TYPENAME ()
Shifting token TYPENAME ()
Entering state 4
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 11
Reducing stack 0 by rule 13 (line 94):
   $1 = token ID ()
-> $$ = nterm declarator ()
Entering state 13
Reading a token
Next token is token ';' ()
Shifting token ';' ()
Entering state 23
Reducing stack 0 by rule 11 (line 87):
   $1 = token TYPENAME ()
   $2 = nterm declarator ()
   $3 = token ';' ()
-> $$ = nterm decl ()
Entering state 9
Reducing stack 0 by rule 4 (line 75):
   $1 = nterm decl ()
-> $$ = nterm stmt ()
Entering state 7
Reducing stack 0 by rule 2 (line 65):
   $1 = nterm prog ()
   $2 = nterm stmt ()
-> $$ = nterm prog ()
Entering state 1
Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token '=' () Shifting token '=' () Entering state 22 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 29 Reading a token Next token is token ';' () Shifting token ';' () Entering state 30 Reducing stack 0 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 14 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 24 Reading a token Next token is token ';' () Reducing stack 0 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering 
state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' () Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' () Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Returning to deterministic operation. 
Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. 
Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 87); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Reducing stack -1 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. 
Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '=' () On stack 0, shifting token '=' () Stack 0 now in state 14 On stack 1, shifting token '=' () Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID () Stack 1 Entering state 22 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' () On stack 0, shifting token '+' () Stack 0 now in state 15 On stack 1, shifting token '+' () Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID () Stack 1 Entering state 15 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 83); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' () Reduced stack 0 by rule 10 (line 84); action deferred. Now in state 8. 
Stack 0 Entering state 8 Next token is token ';' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 83); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 89); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. 
Entering state 4
Reading a token
Next token is token '(' ()
Shifting token '(' ()
Entering state 12
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 18
Reading a token
Next token is token ID ()
Reducing stack 0 by rule 7 (line 80):
   $1 = token ID ()
-> $$ = nterm expr ()
Entering state 20
Next token is token ID ()
syntax error, unexpected ID, expecting '=' or '+' or ')'
Error: popping nterm expr ()
Error: popping token '(' ()
Error: popping token TYPENAME ()
Shifting token error ()
Entering state 3
Next token is token ID ()
Error: discarding token ID ()
Reading a token
Next token is token ')' ()
Error: discarding token ')' ()
Reading a token
Next token is token '=' ()
Error: discarding token '=' ()
Reading a token
Next token is token ID ()
Error: discarding token ID ()
Reading a token
Next token is token '+' ()
Error: discarding token '+' ()
Reading a token
Next token is token ID ()
Error: discarding token ID ()
Reading a token
Next token is token ';' ()
Entering state 3
Next token is token ';' ()
Shifting token ';' ()
Entering state 10
Reducing stack 0 by rule 5 (line 76):
   $1 = token error ()
   $2 = token ';' ()
-> $$ = nterm stmt ()
Entering state 7
Reducing stack 0 by rule 2 (line 65):
   $1 = nterm prog ()
   $2 = nterm stmt ()
-> $$ = nterm prog ()
Entering state 1
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 5
Reducing stack 0 by rule 7 (line 80):
   $1 = token ID ()
-> $$ = nterm expr ()
Entering state 8
Reading a token
Next token is token '+' ()
Shifting token '+' ()
Entering state 15
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 5
Reducing stack 0 by rule 7 (line 80):
   $1 = token ID ()
-> $$ = nterm expr ()
Entering state 25
Reducing stack 0 by rule 9 (line 83):
   $1 = nterm expr ()
   $2 = token '+' ()
   $3 = nterm expr ()
-> $$ = nterm expr ()
Entering state 8
Reading a token
Next token is token ';' ()
Shifting token ';' ()
Entering state 16
Reducing stack 0 by rule 3 (line 74):
   $1 = nterm expr ()
   $2 = token ';' ()
-> $$ = nterm stmt ()
Entering state 7
Reducing stack 0 by rule 2 (line 65):
   $1 = nterm prog ()
   $2 = nterm stmt ()
-> $$ = nterm prog ()
Entering state 1
Reading a token
Next token is token '@' ()
Shifting token '@' ()
Entering state 6
Reducing stack 0 by rule 6 (line 77):
   $1 = token '@' ()
Cleanup: popping nterm prog ()
./cxx-type.at:458: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
Starting parse
Entering state 0
Reducing stack 0 by rule 1 (line 64):
-> $$ = nterm prog ()
Entering state 1
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 5
Reducing stack 0 by rule 7 (line 80):
   $1 = token ID ()
-> $$ = nterm expr ()
Entering state 8
Reading a token
Next token is token '+' ()
Shifting token '+' ()
Entering state 15
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 5
Reducing stack 0 by rule 7 (line 80):
   $1 = token ID ()
-> $$ = nterm expr ()
Entering state 25
Reducing stack 0 by rule 9 (line 83):
   $1 = nterm expr ()
   $2 = token '+' ()
   $3 = nterm expr ()
-> $$ = nterm expr ()
Entering state 8
Reading a token
Next token is token ';' ()
Shifting token ';' ()
Entering state 16
Reducing stack 0 by rule 3 (line 74):
   $1 = nterm expr ()
   $2 = token ';' ()
-> $$ = nterm stmt ()
Entering state 7
Reducing stack 0 by rule 2 (line 65):
   $1 = nterm prog ()
   $2 = nterm stmt ()
-> $$ = nterm prog ()
Entering state 1
Reading a token
Next token is token TYPENAME ()
Shifting token TYPENAME ()
Entering state 4
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 11
Reducing stack 0 by rule 13 (line 94):
   $1 = token ID ()
-> $$ = nterm declarator ()
Entering state 13
Reading a token
Next token is token ';' ()
Shifting token ';' ()
Entering state 23
Reducing stack 0 by rule 11 (line 87):
   $1 = token TYPENAME ()
   $2 = nterm declarator ()
   $3 = token ';' ()
-> $$ = nterm decl ()
Entering state 9
Reducing stack 0 by rule 4 (line 75):
   $1 = nterm decl ()
->
$$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token ID () Shifting token ID () Entering state 11 Reducing stack 0 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Entering state 13 Reading a token Next token is token '=' () Shifting token '=' () Entering state 22 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 29 Reading a token Next token is token ';' () Shifting token ';' () Entering state 30 Reducing stack 0 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Entering state 9 Reducing stack 0 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token '=' () Shifting token '=' () Entering state 14 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 24 Reading a token Next token is token ';' () Reducing stack 0 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm 
prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '+' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '+' () Stack 1 dies. Removing dead stacks. On stack 0, shifting token '+' () Stack 0 now in state 15 Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Returning to deterministic operation. 
Entering state 15 Reading a token Next token is token ID () Shifting token ID () Entering state 5 Reducing stack 0 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Entering state 25 Reducing stack 0 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Entering state 8 Reading a token Next token is token ';' () Shifting token ';' () Entering state 16 Reducing stack 0 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Entering state 7 Reducing stack 0 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Entering state 1 Reading a token Next token is token TYPENAME () Shifting token TYPENAME () Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token ';' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 23 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. 
Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 23 Reduced stack 1 by rule 11 (line 87); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 11 (line 87): $1 = token TYPENAME () $2 = nterm declarator () $3 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Reducing stack -1 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. Entering state 4 Reading a token Next token is token '(' () Shifting token '(' () Entering state 12 Reading a token Next token is token ID () Shifting token ID () Entering state 18 Reading a token Next token is token ')' () Stack 0 Entering state 18 Next token is token ')' () Splitting off stack 1 from 0. Reduced stack 1 by rule 13 (line 94); action deferred. Now in state 21. Stack 1 Entering state 21 Next token is token ')' () Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 20. 
Stack 0 Entering state 20 Next token is token ')' () Stack 1 Entering state 21 Next token is token ')' () On stack 0, shifting token ')' () Stack 0 now in state 27 On stack 1, shifting token ')' () Stack 1 now in state 28 Stack 0 Entering state 27 Reduced stack 0 by rule 8 (line 81); action deferred. Now in state 8. Stack 0 Entering state 8 Reading a token Next token is token '=' () Stack 1 Entering state 28 Reduced stack 1 by rule 14 (line 95); action deferred. Now in state 13. Stack 1 Entering state 13 Next token is token '=' () On stack 0, shifting token '=' () Stack 0 now in state 14 On stack 1, shifting token '=' () Stack 1 now in state 22 Stack 0 Entering state 14 Reading a token Next token is token ID () Stack 1 Entering state 22 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token '+' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token '+' () On stack 0, shifting token '+' () Stack 0 now in state 15 On stack 1, shifting token '+' () Stack 1 now in state 15 Stack 0 Entering state 15 Reading a token Next token is token ID () Stack 1 Entering state 15 Next token is token ID () On stack 0, shifting token ID () Stack 0 now in state 5 On stack 1, shifting token ID () Stack 1 now in state 5 Stack 0 Entering state 5 Reduced stack 0 by rule 7 (line 80); action deferred. Now in state 25. Stack 0 Entering state 25 Reduced stack 0 by rule 9 (line 83); action deferred. Now in state 24. Stack 0 Entering state 24 Reading a token Next token is token ';' () Reduced stack 0 by rule 10 (line 84); action deferred. Now in state 8. 
Stack 0 Entering state 8 Next token is token ';' () Stack 1 Entering state 5 Reduced stack 1 by rule 7 (line 80); action deferred. Now in state 25. Stack 1 Entering state 25 Reduced stack 1 by rule 9 (line 83); action deferred. Now in state 29. Stack 1 Entering state 29 Next token is token ';' () On stack 0, shifting token ';' () Stack 0 now in state 16 On stack 1, shifting token ';' () Stack 1 now in state 30 Stack 0 Entering state 16 Reduced stack 0 by rule 3 (line 74); action deferred. Now in state 7. Stack 0 Entering state 7 Reduced stack 0 by rule 2 (line 65); action deferred. Now in state 1. Stack 0 Entering state 1 Reading a token Next token is token TYPENAME () Stack 1 Entering state 30 Reduced stack 1 by rule 12 (line 89); action deferred. Now in state 9. Stack 1 Entering state 9 Reduced stack 1 by rule 4 (line 75); action deferred. Now in state 7. Stack 1 Entering state 7 Reduced stack 1 by rule 2 (line 65); action deferred. Now in state 1. Merging stack 1 into stack 0. Removing dead stacks. 
On stack 0, shifting token TYPENAME () Stack 0 now in state 4 Reducing stack -1 by rule 13 (line 94): $1 = token ID () -> $$ = nterm declarator () Reducing stack -1 by rule 14 (line 95): $1 = token '(' () $2 = nterm declarator () $3 = token ')' () -> $$ = nterm declarator () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 12 (line 89): $1 = token TYPENAME () $2 = nterm declarator () $3 = token '=' () $4 = nterm expr () $5 = token ';' () -> $$ = nterm decl () Reducing stack -1 by rule 4 (line 75): $1 = nterm decl () -> $$ = nterm stmt () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 8 (line 81): $1 = token TYPENAME () $2 = token '(' () $3 = nterm expr () $4 = token ')' () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 7 (line 80): $1 = token ID () -> $$ = nterm expr () Reducing stack -1 by rule 9 (line 83): $1 = nterm expr () $2 = token '+' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 10 (line 84): $1 = nterm expr () $2 = token '=' () $3 = nterm expr () -> $$ = nterm expr () Reducing stack -1 by rule 3 (line 74): $1 = nterm expr () $2 = token ';' () -> $$ = nterm stmt () Reducing stack -1 by rule 2 (line 65): $1 = nterm prog () $2 = nterm stmt () -> $$ = nterm prog () Returning to deterministic operation. 
Entering state 4
Reading a token
Next token is token '(' ()
Shifting token '(' ()
Entering state 12
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 18
Reading a token
Next token is token ID ()
Reducing stack 0 by rule 7 (line 80):
   $1 = token ID ()
-> $$ = nterm expr ()
Entering state 20
Next token is token ID ()
syntax error, unexpected ID, expecting '=' or '+' or ')'
Error: popping nterm expr ()
Error: popping token '(' ()
Error: popping token TYPENAME ()
Shifting token error ()
Entering state 3
Next token is token ID ()
Error: discarding token ID ()
Reading a token
Next token is token ')' ()
Error: discarding token ')' ()
Reading a token
Next token is token '=' ()
Error: discarding token '=' ()
Reading a token
Next token is token ID ()
Error: discarding token ID ()
Reading a token
Next token is token '+' ()
Error: discarding token '+' ()
Reading a token
Next token is token ID ()
Error: discarding token ID ()
Reading a token
Next token is token ';' ()
Entering state 3
Next token is token ';' ()
Shifting token ';' ()
Entering state 10
Reducing stack 0 by rule 5 (line 76):
   $1 = token error ()
   $2 = token ';' ()
-> $$ = nterm stmt ()
Entering state 7
Reducing stack 0 by rule 2 (line 65):
   $1 = nterm prog ()
   $2 = nterm stmt ()
-> $$ = nterm prog ()
Entering state 1
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 5
Reducing stack 0 by rule 7 (line 80):
   $1 = token ID ()
-> $$ = nterm expr ()
Entering state 8
Reading a token
Next token is token '+' ()
Shifting token '+' ()
Entering state 15
Reading a token
Next token is token ID ()
Shifting token ID ()
Entering state 5
Reducing stack 0 by rule 7 (line 80):
   $1 = token ID ()
-> $$ = nterm expr ()
Entering state 25
Reducing stack 0 by rule 9 (line 83):
   $1 = nterm expr ()
   $2 = token '+' ()
   $3 = nterm expr ()
-> $$ = nterm expr ()
Entering state 8
Reading a token
Next token is token ';' ()
Shifting token ';' ()
Entering state 16
Reducing stack 0 by rule 3 (line 74):
   $1 = nterm expr ()
   $2 = token ';' ()
-> $$ = nterm stmt ()
Entering state 7
Reducing stack 0 by rule 2 (line 65):
   $1 = nterm prog ()
   $2 = nterm stmt ()
-> $$ = nterm prog ()
Entering state 1
Reading a token
Next token is token '@' ()
Shifting token '@' ()
Entering state 6
Reducing stack 0 by rule 6 (line 77):
   $1 = token '@' ()
Cleanup: popping nterm prog ()
713. cxx-type.at:455: ok
715. glr-regression.at:206: testing Badly Collapsed GLR States: glr.cc ...
./glr-regression.at:206: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o glr-regr1.cc glr-regr1.y
stderr:
stdout:
./c++.at:1555: $PREPARSER ./test
stderr:
716. glr-regression.at:207: testing Badly Collapsed GLR States: glr2.cc ...
./glr-regression.at:207: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o glr-regr1.cc glr-regr1.y
./c++.at:1555: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
691. c++.at:1517: ok
./glr-regression.at:206: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr1 glr-regr1.cc $LIBS
./glr-regression.at:207: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr1 glr-regr1.cc $LIBS
717. glr-regression.at:354: testing Improper handling of embedded actions and dollar(-N) in GLR parsers: glr.c ...
./glr-regression.at:354: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr2a.c glr-regr2a.y
./glr-regression.at:354: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr2a glr-regr2a.c $LIBS
stderr:
stdout:
./glr-regression.at:354: $PREPARSER ./glr-regr2a input1.txt
stderr:
./glr-regression.at:354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:354: $PREPARSER ./glr-regr2a input2.txt
stderr:
./glr-regression.at:354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:354: $PREPARSER ./glr-regr2a input3.txt
stderr:
./glr-regression.at:354: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
717. glr-regression.at:354: ok
718. glr-regression.at:355: testing Improper handling of embedded actions and dollar(-N) in GLR parsers: glr.cc ...
./glr-regression.at:355: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr2a.cc glr-regr2a.y
./glr-regression.at:355: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr2a glr-regr2a.cc $LIBS
stderr:
stdout:
./glr-regression.at:206: $PREPARSER ./glr-regr1 BPBPB
stderr:
./glr-regression.at:206: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
715. glr-regression.at:206: ok
719. glr-regression.at:356: testing Improper handling of embedded actions and dollar(-N) in GLR parsers: glr2.cc ...
./glr-regression.at:356: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr2a.cc glr-regr2a.y
./glr-regression.at:356: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr2a glr-regr2a.cc $LIBS
stderr:
stdout:
./glr-regression.at:355: $PREPARSER ./glr-regr2a input1.txt
stderr:
./glr-regression.at:355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:355: $PREPARSER ./glr-regr2a input2.txt
stderr:
./glr-regression.at:355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:355: $PREPARSER ./glr-regr2a input3.txt
stderr:
./glr-regression.at:355: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
718. glr-regression.at:355: ok
720. glr-regression.at:488: testing Improper merging of GLR delayed action sets: glr.c ...
./glr-regression.at:488: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr3.c glr-regr3.y
./glr-regression.at:488: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr3 glr-regr3.c $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from glr-regr1.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr1.cc:2169:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr1.cc:2008:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr1.cc:2431:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr1.cc:2169:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr1.cc:1994:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at glr-regr1.cc:2859:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr1.cc:2169:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr1.cc:1994:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr1.cc:2842:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr1.cc:2830:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:207: $PREPARSER ./glr-regr1 BPBPB
stderr:
./glr-regression.at:207: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
716. glr-regression.at:207: ok
721. glr-regression.at:489: testing Improper merging of GLR delayed action sets: glr.cc ...
./glr-regression.at:489: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr3.cc glr-regr3.y
./glr-regression.at:489: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr3 glr-regr3.cc $LIBS
stderr:
stdout:
./glr-regression.at:488: $PREPARSER ./glr-regr3 input.txt
stderr:
./glr-regression.at:488: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
720. glr-regression.at:488: ok
722. glr-regression.at:490: testing Improper merging of GLR delayed action sets: glr2.cc ...
./glr-regression.at:490: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr3.cc glr-regr3.y
./glr-regression.at:490: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr3 glr-regr3.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from glr-regr2a.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr2a.cc:2186:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr2a.cc:2025:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr2a.cc:2448:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr2a.cc:2186:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr2a.cc:2011:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, char* const&)' at glr-regr2a.cc:2912:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr2a.cc:2186:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr2a.cc:2011:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr2a.cc:2895:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr2a.cc:2883:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:356: $PREPARSER ./glr-regr2a input1.txt
stderr:
./glr-regression.at:356: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:356: $PREPARSER ./glr-regr2a input2.txt
stderr:
./glr-regression.at:356: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:356: $PREPARSER ./glr-regr2a input3.txt
stderr:
./glr-regression.at:356: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
719. glr-regression.at:356: ok
723. glr-regression.at:592: testing Duplicate representation of merged trees: %union { char *ptr; } glr.c ...
./glr-regression.at:592: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr4.c glr-regr4.y
./glr-regression.at:592: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr4 glr-regr4.c $LIBS
stderr:
stdout:
./glr-regression.at:489: $PREPARSER ./glr-regr3 input.txt
stderr:
./glr-regression.at:489: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
721. glr-regression.at:489: ok
724. glr-regression.at:593: testing Duplicate representation of merged trees: %union { char *ptr; } glr.cc ...
./glr-regression.at:593: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr4.cc glr-regr4.y
./glr-regression.at:593: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr4 glr-regr4.cc $LIBS
stderr:
stdout:
./glr-regression.at:592: $PREPARSER ./glr-regr4
stderr:
./glr-regression.at:592: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
723. glr-regression.at:592: ok
725. glr-regression.at:594: testing Duplicate representation of merged trees: %union { char *ptr; } glr2.cc ...
./glr-regression.at:594: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr4.cc glr-regr4.y
./glr-regression.at:594: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr4 glr-regr4.cc $LIBS
stderr:
stdout:
./glr-regression.at:593: $PREPARSER ./glr-regr4
stderr:
./glr-regression.at:593: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
724. glr-regression.at:593: ok
726. glr-regression.at:596: testing Duplicate representation of merged trees: api.value.type=union glr.c ...
./glr-regression.at:596: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr4.c glr-regr4.y
./glr-regression.at:596: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr4 glr-regr4.c $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from glr-regr3.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr3.cc:2221:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr3.cc:2060:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr3.cc:2483:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr3.cc:2221:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr3.cc:2046:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at glr-regr3.cc:2965:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr3.cc:2221:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr3.cc:2046:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr3.cc:2948:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr3.cc:2936:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:490: $PREPARSER ./glr-regr3 input.txt
stderr:
./glr-regression.at:490: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
722. glr-regression.at:490: ok
727. glr-regression.at:597: testing Duplicate representation of merged trees: api.value.type=union glr.cc ...
./glr-regression.at:597: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr4.cc glr-regr4.y
./glr-regression.at:597: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr4 glr-regr4.cc $LIBS
stderr:
stdout:
./glr-regression.at:596: $PREPARSER ./glr-regr4
stderr:
./glr-regression.at:596: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
726. glr-regression.at:596: ok
728. glr-regression.at:598: testing Duplicate representation of merged trees: api.value.type=union glr2.cc ...
./glr-regression.at:598: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr4.cc glr-regr4.y
./glr-regression.at:598: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr4 glr-regr4.cc $LIBS
stderr:
stdout:
./glr-regression.at:597: $PREPARSER ./glr-regr4
stderr:
./glr-regression.at:597: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
727. glr-regression.at:597: ok
729. glr-regression.at:670: testing User destructor for unresolved GLR semantic value: glr.c ...
./glr-regression.at:670: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr5.c glr-regr5.y
./glr-regression.at:670: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr5 glr-regr5.c $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from glr-regr4.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr4.cc:2181:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr4.cc:2020:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr4.cc:2443:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr4.cc:2181:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr4.cc:2006:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at glr-regr4.cc:2901:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr4.cc:2181:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr4.cc:2006:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr4.cc:2884:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr4.cc:2872:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:594: $PREPARSER ./glr-regr4
stderr:
./glr-regression.at:594: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
725. glr-regression.at:594: ok
730. glr-regression.at:671: testing User destructor for unresolved GLR semantic value: glr.cc ...
./glr-regression.at:671: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr5.cc glr-regr5.y
./glr-regression.at:671: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr5 glr-regr5.cc $LIBS
stderr:
stdout:
./glr-regression.at:670: $PREPARSER ./glr-regr5
stderr:
Ambiguity detected.
Option 1,
  start -> 'a'
Option 2,
  start -> 'a'
syntax is ambiguous
./glr-regression.at:670: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
729. glr-regression.at:670: ok
731. glr-regression.at:672: testing User destructor for unresolved GLR semantic value: glr2.cc ...
./glr-regression.at:672: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr5.cc glr-regr5.y
./glr-regression.at:672: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr5 glr-regr5.cc $LIBS
stderr:
stdout:
./glr-regression.at:671: $PREPARSER ./glr-regr5
stderr:
Ambiguity detected.
Option 1,
  start -> 'a'
Option 2,
  start -> 'a'
syntax is ambiguous
./glr-regression.at:671: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
730. glr-regression.at:671: ok
732. glr-regression.at:738: testing User destructor after an error during a split parse: glr.c ...
./glr-regression.at:738: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr6.c glr-regr6.y
./glr-regression.at:738: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr6 glr-regr6.c $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from glr-regr4.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr4.cc:2184:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr4.cc:2023:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr4.cc:2446:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr4.cc:2184:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr4.cc:2009:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at glr-regr4.cc:2904:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr4.cc:2184:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr4.cc:2009:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr4.cc:2887:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr4.cc:2875:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:598: $PREPARSER ./glr-regr4
stderr:
./glr-regression.at:598: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
728. glr-regression.at:598: ok
733. glr-regression.at:739: testing User destructor after an error during a split parse: glr.cc ...
./glr-regression.at:739: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr6.cc glr-regr6.y
./glr-regression.at:739: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr6 glr-regr6.cc $LIBS
stderr:
stdout:
./glr-regression.at:738: $PREPARSER ./glr-regr6
stderr:
Ambiguity detected.
Option 1, start -> 'a'
Option 2, start -> 'a'
syntax is ambiguous
./glr-regression.at:738: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
732. glr-regression.at:738: ok
734. glr-regression.at:740: testing User destructor after an error during a split parse: glr2.cc ...
./glr-regression.at:740: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr6.cc glr-regr6.y
./glr-regression.at:740: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr6 glr-regr6.cc $LIBS
stderr:
stdout:
./glr-regression.at:739: $PREPARSER ./glr-regr6
stderr:
Ambiguity detected.
Option 1, start -> 'a'
Option 2, start -> 'a'
syntax is ambiguous
./glr-regression.at:739: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
733. glr-regression.at:739: ok
735. glr-regression.at:843: testing Duplicated user destructor for lookahead: glr.c ...
./glr-regression.at:843: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr7.c glr-regr7.y
./glr-regression.at:843: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr7 glr-regr7.c $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from glr-regr5.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr5.cc:2172:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr5.cc:2011:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr5.cc:2434:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr5.cc:2172:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr5.cc:1997:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at glr-regr5.cc:2856:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr5.cc:2172:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr5.cc:1997:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr5.cc:2839:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr5.cc:2827:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:672: $PREPARSER ./glr-regr5
stderr:
Ambiguity detected.
Option 1, start -> 'a'
Option 2, start -> 'a'
syntax is ambiguous
./glr-regression.at:672: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
731. glr-regression.at:672: ok
736. glr-regression.at:844: testing Duplicated user destructor for lookahead: glr.cc ...
./glr-regression.at:844: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr7.cc glr-regr7.y
./glr-regression.at:844: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr7 glr-regr7.cc $LIBS
stderr:
stdout:
./glr-regression.at:843: $PREPARSER ./glr-regr7
stderr:
memory exhausted
./glr-regression.at:843: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
735. glr-regression.at:843: ok
737. glr-regression.at:845: testing Duplicated user destructor for lookahead: glr2.cc ...
./glr-regression.at:845: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr7.cc glr-regr7.y
./glr-regression.at:845: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr7 glr-regr7.cc $LIBS
stderr:
stdout:
./glr-regression.at:844: $PREPARSER ./glr-regr7
stderr:
memory exhausted
./glr-regression.at:844: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
736. glr-regression.at:844: ok
738. glr-regression.at:944: testing Incorrectly initialized location for empty right-hand side in GLR: glr.c ...
./glr-regression.at:944: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr8.c glr-regr8.y
./glr-regression.at:944: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr8 glr-regr8.c $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from glr-regr6.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr6.cc:2171:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr6.cc:2010:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr6.cc:2433:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr6.cc:2171:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr6.cc:1996:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at glr-regr6.cc:2843:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr6.cc:2171:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr6.cc:1996:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr6.cc:2826:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr6.cc:2814:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:740: $PREPARSER ./glr-regr6
stderr:
Ambiguity detected.
Option 1, start -> 'a'
Option 2, start -> 'a'
syntax is ambiguous
./glr-regression.at:740: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
734. glr-regression.at:740: ok
739. glr-regression.at:945: testing Incorrectly initialized location for empty right-hand side in GLR: glr.cc ...
./glr-regression.at:945: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr8.cc glr-regr8.y
./glr-regression.at:945: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr8 glr-regr8.cc $LIBS
stderr:
stdout:
./glr-regression.at:944: $PREPARSER ./glr-regr8
stderr:
./glr-regression.at:944: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
738. glr-regression.at:944: ok
740. glr-regression.at:946: testing Incorrectly initialized location for empty right-hand side in GLR: glr2.cc ...
./glr-regression.at:946: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr8.cc glr-regr8.y
./glr-regression.at:946: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr8 glr-regr8.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from glr-regr7.cc:95:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr7.cc:2184:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr7.cc:2023:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr7.cc:2446:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr7.cc:2184:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr7.cc:2009:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at glr-regr7.cc:2856:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr7.cc:2184:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr7.cc:2009:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr7.cc:2839:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr7.cc:2827:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:845: $PREPARSER ./glr-regr7
stderr:
memory exhausted
./glr-regression.at:845: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
737. glr-regression.at:845: ok
741. glr-regression.at:1036: testing No users destructors if stack 0 deleted: glr.c ...
./glr-regression.at:1036: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr9.c glr-regr9.y
./glr-regression.at:1036: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr9 glr-regr9.c $LIBS
stderr:
stdout:
./glr-regression.at:945: $PREPARSER ./glr-regr8
stderr:
./glr-regression.at:945: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
739. glr-regression.at:945: ok
742. glr-regression.at:1037: testing No users destructors if stack 0 deleted: glr.cc ...
./glr-regression.at:1037: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr9.cc glr-regr9.y
./glr-regression.at:1037: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr9 glr-regr9.cc $LIBS
stderr:
stdout:
./glr-regression.at:1036: $PREPARSER ./glr-regr9
stderr:
memory exhausted
./glr-regression.at:1036: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
741. glr-regression.at:1036: ok
743. glr-regression.at:1038: testing No users destructors if stack 0 deleted: glr2.cc ...
./glr-regression.at:1038: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr9.cc glr-regr9.y
./glr-regression.at:1038: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr9 glr-regr9.cc $LIBS
stderr:
stdout:
./glr-regression.at:1037: $PREPARSER ./glr-regr9
stderr:
memory exhausted
./glr-regression.at:1037: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
742. glr-regression.at:1037: ok
744. glr-regression.at:1102: testing Corrupted semantic options if user action cuts parse: glr.c ...
./glr-regression.at:1102: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr10.c glr-regr10.y
./glr-regression.at:1102: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr10 glr-regr10.c $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from glr-regr8.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr8.cc:2470:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr8.cc:2306:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr8.cc:2736:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr8.cc:2470:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr8.cc:2292:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&, const yy::parser::location_type&)' at glr-regr8.cc:3182:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr8.cc:2470:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr8.cc:2292:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr8.cc:3165:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr8.cc:3153:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:946: $PREPARSER ./glr-regr8
stderr:
./glr-regression.at:946: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
740. glr-regression.at:946: ok
stderr:
stdout:
./glr-regression.at:1102: $PREPARSER ./glr-regr10
stderr:
./glr-regression.at:1102: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
744. glr-regression.at:1102: ok
745. glr-regression.at:1103: testing Corrupted semantic options if user action cuts parse: glr.cc ...
./glr-regression.at:1103: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr10.cc glr-regr10.y
./glr-regression.at:1103: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr10 glr-regr10.cc $LIBS
746. glr-regression.at:1104: testing Corrupted semantic options if user action cuts parse: glr2.cc ...
./glr-regression.at:1104: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr10.cc glr-regr10.y
./glr-regression.at:1104: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr10 glr-regr10.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from glr-regr9.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr9.cc:2178:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr9.cc:2017:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr9.cc:2440:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr9.cc:2178:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr9.cc:2003:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at glr-regr9.cc:2868:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr9.cc:2178:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr9.cc:2003:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr9.cc:2851:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr9.cc:2839:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:1038: $PREPARSER ./glr-regr9
stderr:
memory exhausted
./glr-regression.at:1038: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
743. glr-regression.at:1038: ok
747. glr-regression.at:1174: testing Undesirable destructors if user action cuts parse: glr.c ...
./glr-regression.at:1174: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr11.c glr-regr11.y
./glr-regression.at:1174: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr11 glr-regr11.c $LIBS
stderr:
stdout:
./glr-regression.at:1103: $PREPARSER ./glr-regr10
stderr:
./glr-regression.at:1103: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
745. glr-regression.at:1103: ok
748. glr-regression.at:1175: testing Undesirable destructors if user action cuts parse: glr.cc ...
./glr-regression.at:1175: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr11.cc glr-regr11.y
./glr-regression.at:1175: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr11 glr-regr11.cc $LIBS
stderr:
stdout:
./glr-regression.at:1174: $PREPARSER ./glr-regr11
stderr:
./glr-regression.at:1174: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
747. glr-regression.at:1174: ok
749. glr-regression.at:1176: testing Undesirable destructors if user action cuts parse: glr2.cc ...
./glr-regression.at:1176: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr11.cc glr-regr11.y
./glr-regression.at:1176: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr11 glr-regr11.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from glr-regr10.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr10.cc:2172:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr10.cc:2011:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr10.cc:2434:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...)
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr10.cc:2172:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr10.cc:1997:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at glr-regr10.cc:2856:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr10.cc:2172:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr10.cc:1997:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr10.cc:2839:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr10.cc:2827:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./glr-regression.at:1104: $PREPARSER ./glr-regr10 stderr: ./glr-regression.at:1104: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 746. glr-regression.at:1104: ok stderr: stdout: ./glr-regression.at:1175: $PREPARSER ./glr-regr11 stderr: ./glr-regression.at:1175: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 748. glr-regression.at:1175: ok 750. glr-regression.at:1310: testing Leaked semantic values if user action cuts parse: glr.c ... 
./glr-regression.at:1310: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr12.c glr-regr12.y ./glr-regression.at:1310: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr12 glr-regr12.c $LIBS 751. glr-regression.at:1311: testing Leaked semantic values if user action cuts parse: glr.cc ... ./glr-regression.at:1311: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr12.cc glr-regr12.y ./glr-regression.at:1311: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr12 glr-regr12.cc $LIBS stderr: stdout: ./glr-regression.at:1310: $PREPARSER ./glr-regr12 stderr: ./glr-regression.at:1310: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 750. glr-regression.at:1310: ok 752. glr-regression.at:1312: testing Leaked semantic values if user action cuts parse: glr2.cc ... ./glr-regression.at:1312: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr12.cc glr-regr12.y ./glr-regression.at:1312: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr12 glr-regr12.cc $LIBS stderr: stdout: ./glr-regression.at:1311: $PREPARSER ./glr-regr12 stderr: ./glr-regression.at:1311: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 751. glr-regression.at:1311: ok 753. glr-regression.at:1445: testing Incorrect lookahead during deterministic GLR: glr.c ... ./glr-regression.at:1445: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr13.c glr-regr13.y ./glr-regression.at:1445: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr13 glr-regr13.c $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from glr-regr11.cc:86: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr11.cc:2173:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr11.cc:2012:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr11.cc:2435:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr11.cc:2173:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr11.cc:1998:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at glr-regr11.cc:2857:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr11.cc:2173:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr11.cc:1998:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr11.cc:2840:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr11.cc:2828:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./glr-regression.at:1176: $PREPARSER ./glr-regr11 stderr: ./glr-regression.at:1176: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 749. glr-regression.at:1176: ok 754. glr-regression.at:1446: testing Incorrect lookahead during deterministic GLR: glr.cc ... ./glr-regression.at:1446: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr13.cc glr-regr13.y ./glr-regression.at:1446: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr13 glr-regr13.cc $LIBS stderr: stdout: ./glr-regression.at:1445: $PREPARSER ./glr-regr13 stderr: ./glr-regression.at:1445: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 753. 
glr-regression.at:1445: ok 755. glr-regression.at:1447: testing Incorrect lookahead during deterministic GLR: glr2.cc ... ./glr-regression.at:1447: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr13.cc glr-regr13.y ./glr-regression.at:1447: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr13 glr-regr13.cc $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from glr-regr12.cc:86: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr12.cc:2183:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr12.cc:2022:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr12.cc:2445:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr12.cc:2183:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr12.cc:2008:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at glr-regr12.cc:2908:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr12.cc:2183:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr12.cc:2008:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr12.cc:2891:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr12.cc:2879:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./glr-regression.at:1312: $PREPARSER ./glr-regr12 stderr: ./glr-regression.at:1312: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 752. glr-regression.at:1312: ok 756. glr-regression.at:1678: testing Incorrect lookahead during nondeterministic GLR: glr.c ... ./glr-regression.at:1678: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr14.c glr-regr14.y ./glr-regression.at:1678: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr14 glr-regr14.c $LIBS stderr: stdout: ./glr-regression.at:1446: $PREPARSER ./glr-regr13 stderr: ./glr-regression.at:1446: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 754. 
glr-regression.at:1446: ok 757. glr-regression.at:1679: testing Incorrect lookahead during nondeterministic GLR: glr.cc ... ./glr-regression.at:1679: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr14.cc glr-regr14.y ./glr-regression.at:1679: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr14 glr-regr14.cc $LIBS stderr: stdout: ./glr-regression.at:1678: $PREPARSER ./glr-regr14 stderr: ./glr-regression.at:1678: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 756. glr-regression.at:1678: ok 758. glr-regression.at:1680: testing Incorrect lookahead during nondeterministic GLR: glr2.cc ... ./glr-regression.at:1680: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr14.cc glr-regr14.y ./glr-regression.at:1680: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr14 glr-regr14.cc $LIBS stderr: stdout: ./glr-regression.at:1679: $PREPARSER ./glr-regr14 stderr: ./glr-regression.at:1679: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 757. glr-regression.at:1679: ok 759. glr-regression.at:1785: testing Leaked semantic values when reporting ambiguity: glr.c ... ./glr-regression.at:1785: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr15.c glr-regr15.y ./glr-regression.at:1785: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr15 glr-regr15.c $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from glr-regr13.cc:86: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr13.cc:2485:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr13.cc:2321:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr13.cc:2751:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr13.cc:2485:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr13.cc:2307:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&, const yy::parser::location_type&)' at glr-regr13.cc:3241:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr13.cc:2485:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr13.cc:2307:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr13.cc:3224:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr13.cc:3212:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./glr-regression.at:1447: $PREPARSER ./glr-regr13 stderr: ./glr-regression.at:1447: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 755. glr-regression.at:1447: ok 760. glr-regression.at:1786: testing Leaked semantic values when reporting ambiguity: glr.cc ... ./glr-regression.at:1786: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr15.cc glr-regr15.y ./glr-regression.at:1786: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr15 glr-regr15.cc $LIBS stderr: stdout: ./glr-regression.at:1785: $PREPARSER ./glr-regr15 stderr: Ambiguity detected. 
Option 1, ambiguity -> ambiguity1 -> Option 2, ambiguity -> ambiguity2 -> syntax is ambiguous ./glr-regression.at:1785: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 759. glr-regression.at:1785: ok 761. glr-regression.at:1787: testing Leaked semantic values when reporting ambiguity: glr2.cc ... ./glr-regression.at:1787: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr15.cc glr-regr15.y ./glr-regression.at:1787: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr15 glr-regr15.cc $LIBS stderr: stdout: ./glr-regression.at:1786: $PREPARSER ./glr-regr15 stderr: Ambiguity detected. Option 1, ambiguity -> ambiguity1 -> Option 2, ambiguity -> ambiguity2 -> syntax is ambiguous ./glr-regression.at:1786: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 760. glr-regression.at:1786: ok 762. glr-regression.at:1860: testing Leaked lookahead after nondeterministic parse syntax error: glr.c ... ./glr-regression.at:1860: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr16.c glr-regr16.y ./glr-regression.at:1860: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr16 glr-regr16.c $LIBS stderr: In file included from /usr/include/c++/12/vector:70, from glr-regr14.cc:86: /usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]': /usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1 439 | vector<_Tp, _Alloc>:: | ^~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr14.cc:2508:24, inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr14.cc:2344:66, inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr14.cc:2774:65: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr14.cc:2508:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr14.cc:2330:58, inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&, const yy::parser::location_type&)' at glr-regr14.cc:3345:58: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) 
[with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]', inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21, inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr14.cc:2508:24, inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr14.cc:2330:58, inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr14.cc:3328:58, inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr14.cc:3316:27: /usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1 123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...); | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ stdout: ./glr-regression.at:1680: $PREPARSER ./glr-regr14 stderr: ./glr-regression.at:1680: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr 758. glr-regression.at:1680: ok 763. glr-regression.at:1861: testing Leaked lookahead after nondeterministic parse syntax error: glr.cc ... 
./glr-regression.at:1861: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr16.cc glr-regr16.y
./glr-regression.at:1861: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr16 glr-regr16.cc $LIBS
stderr:
stdout:
./glr-regression.at:1860: $PREPARSER ./glr-regr16
stderr:
syntax error
./glr-regression.at:1860: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
762. glr-regression.at:1860: ok
764. glr-regression.at:1862: testing Leaked lookahead after nondeterministic parse syntax error: glr2.cc ...
./glr-regression.at:1862: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr16.cc glr-regr16.y
./glr-regression.at:1862: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr16 glr-regr16.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from glr-regr15.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr15.cc:2177:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr15.cc:2016:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr15.cc:2439:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr15.cc:2177:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr15.cc:2002:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at glr-regr15.cc:2867:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr15.cc:2177:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr15.cc:2002:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr15.cc:2850:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr15.cc:2838:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:1787: $PREPARSER ./glr-regr15
stderr:
Ambiguity detected.
Option 1,
ambiguity -> ambiguity1 ->
Option 2,
ambiguity -> ambiguity2 ->
syntax is ambiguous
./glr-regression.at:1787: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
stderr:
stdout:
./glr-regression.at:1861: $PREPARSER ./glr-regr16
761. glr-regression.at:1787: ok
stderr:
syntax error
./glr-regression.at:1861: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
763. glr-regression.at:1861: ok
765. glr-regression.at:1964: testing Uninitialized location when reporting ambiguity: glr.c api.pure ...
./glr-regression.at:1964: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr17.c glr-regr17.y
766. glr-regression.at:1965: testing Uninitialized location when reporting ambiguity: glr.cc ...
./glr-regression.at:1965: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr17.cc glr-regr17.y
./glr-regression.at:1964: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o glr-regr17 glr-regr17.c $LIBS
./glr-regression.at:1965: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o glr-regr17 glr-regr17.cc $LIBS
stderr:
stdout:
./glr-regression.at:1964: $PREPARSER ./glr-regr17
stderr:
Ambiguity detected.
Option 1,
start -> ambig1 -> sub_ambig2 -> empty2 -> 'a' 'b' empty1 ->
Option 2,
start -> ambig2 -> sub_ambig2 -> empty2 -> 'a' 'b' empty2 ->
1.1-2.2: syntax is ambiguous
./glr-regression.at:1964: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
765. glr-regression.at:1964: ok
767. glr-regression.at:1966: testing Uninitialized location when reporting ambiguity: glr2.cc ...
./glr-regression.at:1966: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -rall -o glr-regr17.cc glr-regr17.y
./glr-regression.at:1966: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o glr-regr17 glr-regr17.cc $LIBS
stderr:
stdout:
./glr-regression.at:1965: $PREPARSER ./glr-regr17
stderr:
Ambiguity detected.
Option 1,
start -> ambig1 -> sub_ambig2 -> empty2 -> 'a' 'b' empty1 ->
Option 2,
start -> ambig2 -> sub_ambig2 -> empty2 -> 'a' 'b' empty2 ->
1.1-2.2: syntax is ambiguous
./glr-regression.at:1965: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
766. glr-regression.at:1965: ok
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from glr-regr16.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr16.cc:2170:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr16.cc:2009:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr16.cc:2432:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr16.cc:2170:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr16.cc:1995:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at glr-regr16.cc:2842:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr16.cc:2170:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr16.cc:1995:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr16.cc:2825:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr16.cc:2813:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:1862: $PREPARSER ./glr-regr16
stderr:
syntax error
./glr-regression.at:1862: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
764. glr-regression.at:1862: ok
768. glr-regression.at:2035: testing Missed %merge type warnings when LHS type is declared later: glr.c ...
./glr-regression.at:2035: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o glr-regr18.c -rall -fcaret glr-regr18.y
768. glr-regression.at:2035: ok
770. glr-regression.at:2037: testing Missed %merge type warnings when LHS type is declared later: glr2.cc ...
./glr-regression.at:2037: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o glr-regr18.c -rall -fcaret glr-regr18.y
769. glr-regression.at:2036: testing Missed %merge type warnings when LHS type is declared later: glr.cc ...
./glr-regression.at:2036: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; VALGRIND_OPTS="$VALGRIND_OPTS --leak-check=summary --show-reachable=no"; export VALGRIND_OPTS; bison --color=no -fno-caret -o glr-regr18.c -rall -fcaret glr-regr18.y
770. glr-regression.at:2037: ok
769. glr-regression.at:2036: ok
771. glr-regression.at:2149: testing Ambiguity reports: glr.c ...
./glr-regression.at:2149: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
772. glr-regression.at:2150: testing Ambiguity reports: glr.cc ...
./glr-regression.at:2150: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./glr-regression.at:2149: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
./glr-regression.at:2150: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./glr-regression.at:2149: $PREPARSER ./input --debug
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token 'a' ()
Shifting token 'a' ()
Entering state 1
Reading a token
Next token is token 'b' ()
Shifting token 'b' ()
Entering state 3
Reducing stack 0 by rule 3 (line 30):
   $1 = token 'b' ()
-> $$ = nterm b ()
Entering state 4
Reading a token
Next token is token 'c' ()
Shifting token 'c' ()
Entering state 6
Reducing stack 0 by rule 4 (line 31):
-> $$ = nterm d ()
Entering state 7
Reading a token
Now at end of input.
Stack 0 Entering state 7
Now at end of input.
Splitting off stack 1 from 0.
Reduced stack 1 by rule 2 (line 28); action deferred. Now in state 2.
Stack 1 Entering state 2
Now at end of input.
Reduced stack 0 by rule 1 (line 27); action deferred. Now in state 2.
Merging stack 0 into stack 1.
Stack 1 Entering state 2
Now at end of input.
Removing dead stacks.
Rename stack 1 -> 0.
On stack 0, shifting token "end of file" ()
Stack 0 now in state 5
Ambiguity detected.
Option 1,
start -> 'a' b 'c' d
Option 2,
start -> 'a' b 'c' d
syntax is ambiguous
Cleanup: popping token "end of file" ()
Cleanup: popping unresolved nterm start ()
Cleanup: popping nterm d ()
Cleanup: popping token 'c' ()
Cleanup: popping nterm b ()
Cleanup: popping token 'a' ()
./glr-regression.at:2149: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
771. glr-regression.at:2149: ok
773. glr-regression.at:2151: testing Ambiguity reports: glr2.cc ...
./glr-regression.at:2151: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./glr-regression.at:2151: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
stdout:
./glr-regression.at:2150: $PREPARSER ./input --debug
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token 'a' ()
Shifting token 'a' ()
Entering state 1
Reading a token
Next token is token 'b' ()
Shifting token 'b' ()
Entering state 3
Reducing stack 0 by rule 3 (line 30):
   $1 = token 'b' ()
-> $$ = nterm b ()
Entering state 4
Reading a token
Next token is token 'c' ()
Shifting token 'c' ()
Entering state 6
Reducing stack 0 by rule 4 (line 31):
-> $$ = nterm d ()
Entering state 7
Reading a token
Now at end of input.
Stack 0 Entering state 7
Now at end of input.
Splitting off stack 1 from 0.
Reduced stack 1 by rule 2 (line 28); action deferred. Now in state 2.
Stack 1 Entering state 2
Now at end of input.
Reduced stack 0 by rule 1 (line 27); action deferred. Now in state 2.
Merging stack 0 into stack 1.
Stack 1 Entering state 2
Now at end of input.
Removing dead stacks.
Rename stack 1 -> 0.
On stack 0, shifting token "end of file" ()
Stack 0 now in state 5
Ambiguity detected.
Option 1,
start -> 'a' b 'c' d
Option 2,
start -> 'a' b 'c' d
syntax is ambiguous
Cleanup: popping token "end of file" ()
Cleanup: popping unresolved nterm start ()
Cleanup: popping nterm d ()
Cleanup: popping token 'c' ()
Cleanup: popping nterm b ()
Cleanup: popping token 'a' ()
./glr-regression.at:2150: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
772. glr-regression.at:2150: ok
774. glr-regression.at:2229: testing Predicates: glr.c ...
./glr-regression.at:2229: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.c input.y
./glr-regression.at:2229: $CC $CFLAGS $CPPFLAGS $LDFLAGS -o input input.c $LIBS
stderr:
stdout:
./glr-regression.at:2229: $PREPARSER ./input Nwin
stderr:
./glr-regression.at:2229: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:2229: $PREPARSER ./input Owin
stderr:
syntax error, unexpected 'n', expecting 'o'
./glr-regression.at:2229: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:2229: $PREPARSER ./input Owio
stderr:
./glr-regression.at:2229: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:2229: $PREPARSER ./input Nwio
stderr:
syntax error, unexpected 'o', expecting 'n'
./glr-regression.at:2229: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
774. glr-regression.at:2229: ok
775. glr-regression.at:2230: testing Predicates: glr.cc ...
./glr-regression.at:2230: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./glr-regression.at:2230: $CXX $CPPFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from glr-regr17.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr17.cc:2509:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at glr-regr17.cc:2345:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr17.cc:2778:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr17.cc:2509:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr17.cc:2331:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&, const yy::parser::location_type&)' at glr-regr17.cc:3210:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at glr-regr17.cc:2509:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at glr-regr17.cc:2331:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at glr-regr17.cc:3193:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at glr-regr17.cc:3181:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:1966: $PREPARSER ./glr-regr17
stderr:
Ambiguity detected.
Option 1,
start -> ambig1 -> sub_ambig2 -> empty2 -> 'a' 'b' empty1 ->
Option 2,
start -> ambig2 -> sub_ambig2 -> empty2 -> 'a' 'b' empty2 ->
1.1-2.2: syntax is ambiguous
./glr-regression.at:1966: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
767. glr-regression.at:1966: ok
776. glr-regression.at:2231: testing Predicates: glr2.cc ...
./glr-regression.at:2231: COLUMNS=1000; export COLUMNS; NO_TERM_HYPERLINKS=1; export NO_TERM_HYPERLINKS; bison --color=no -fno-caret -o input.cc input.y
./glr-regression.at:2231: $CXX $CPPFLAGS $CXX11_CXXFLAGS $CXXFLAGS $LDFLAGS -o input input.cc $LIBS
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from input.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2168:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:2007:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2430:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2168:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1993:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at input.cc:2840:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2168:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:1993:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2823:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:2811:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:2151: $PREPARSER ./input --debug
stderr:
Starting parse
Entering state 0
Reading a token
Next token is token 'a' ()
Shifting token 'a' ()
Entering state 1
Reading a token
Next token is token 'b' ()
Shifting token 'b' ()
Entering state 3
Reducing stack 0 by rule 3 (line 30):
   $1 = token 'b' ()
-> $$ = nterm b ()
Entering state 4
Reading a token
Next token is token 'c' ()
Shifting token 'c' ()
Entering state 6
Reducing stack 0 by rule 4 (line 31):
-> $$ = nterm d ()
Entering state 7
Reading a token
Now at end of input.
Stack 0 Entering state 7
Now at end of input.
Splitting off stack 1 from 0.
Reduced stack 1 by rule 2 (line 28); action deferred. Now in state 2.
Stack 1 Entering state 2
Now at end of input.
Reduced stack 0 by rule 1 (line 27); action deferred. Now in state 2.
Merging stack 0 into stack 1.
Stack 1 Entering state 2
Now at end of input.
Removing dead stacks.
Rename stack 1 -> 0.
On stack 0, shifting token "end of file" ()
Stack 0 now in state 5
Ambiguity detected.
Option 1,
start -> 'a' b 'c' d
Option 2,
start -> 'a' b 'c' d
syntax is ambiguous
Cleanup: popping token "end of file" ()
Cleanup: popping unresolved nterm start ()
Cleanup: popping nterm d ()
Cleanup: popping token 'c' ()
Cleanup: popping nterm b ()
Cleanup: popping token 'a' ()
./glr-regression.at:2151: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
773. glr-regression.at:2151: ok
stderr:
stdout:
./glr-regression.at:2230: $PREPARSER ./input Nwin
stderr:
./glr-regression.at:2230: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:2230: $PREPARSER ./input Owin
stderr:
syntax error, unexpected 'n', expecting 'o'
./glr-regression.at:2230: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:2230: $PREPARSER ./input Owio
stderr:
./glr-regression.at:2230: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:2230: $PREPARSER ./input Nwio
stderr:
syntax error, unexpected 'o', expecting 'n'
./glr-regression.at:2230: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
775. glr-regression.at:2230: ok
stderr:
In file included from /usr/include/c++/12/vector:70,
                 from input.cc:86:
/usr/include/c++/12/bits/vector.tcc: In member function 'void std::vector<_Tp, _Alloc>::_M_realloc_insert(iterator, _Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]':
/usr/include/c++/12/bits/vector.tcc:439:7: note: parameter passing for argument of type 'std::vector<{anonymous}::glr_stack_item>::iterator' changed in GCC 7.1
  439 | vector<_Tp, _Alloc>::
      | ^~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2201:24,
    inlined from '{anonymous}::semantic_option& {anonymous}::state_stack::yynewSemanticOption({anonymous}::semantic_option)' at input.cc:2040:66,
    inlined from 'void yy::parser::glr_stack::yyaddDeferredAction({anonymous}::state_set_index, yy::parser::glr_state*, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2466:65:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2201:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:2026:58,
    inlined from 'void yy::parser::glr_stack::yyglrShift({anonymous}::state_set_index, {anonymous}::state_num, size_t, const yy::parser::value_type&)' at input.cc:2903:58:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In member function 'void std::vector<_Tp, _Alloc>::emplace_back(_Args&& ...) [with _Args = {{anonymous}::glr_stack_item}; _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]',
    inlined from 'void std::vector<_Tp, _Alloc>::push_back(value_type&&) [with _Tp = {anonymous}::glr_stack_item; _Alloc = std::allocator<{anonymous}::glr_stack_item>]' at /usr/include/c++/12/bits/stl_vector.h:1294:21,
    inlined from 'size_t {anonymous}::state_stack::yynewGLRStackItem(bool)' at input.cc:2201:24,
    inlined from '{anonymous}::glr_state& {anonymous}::state_stack::yynewGLRState(const {anonymous}::glr_state&)' at input.cc:2026:58,
    inlined from 'void yy::parser::glr_stack::yyglrShiftDefer({anonymous}::state_set_index, {anonymous}::state_num, size_t, yy::parser::glr_state*, {anonymous}::rule_num)' at input.cc:2886:58,
    inlined from 'YYRESULTTAG yy::parser::glr_stack::yyglrReduce({anonymous}::state_set_index, {anonymous}::rule_num, bool)' at input.cc:2874:27:
/usr/include/c++/12/bits/vector.tcc:123:28: note: parameter passing for argument of type '__gnu_cxx::__normal_iterator<{anonymous}::glr_stack_item*, std::vector<{anonymous}::glr_stack_item> >' changed in GCC 7.1
  123 | _M_realloc_insert(end(), std::forward<_Args>(__args)...);
      | ~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
stdout:
./glr-regression.at:2231: $PREPARSER ./input Nwin
stderr:
./glr-regression.at:2231: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:2231: $PREPARSER ./input Owin
stderr:
syntax error, unexpected 'n', expecting 'o'
./glr-regression.at:2231: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:2231: $PREPARSER ./input Owio
stderr:
./glr-regression.at:2231: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
./glr-regression.at:2231: $PREPARSER ./input Nwio
stderr:
syntax error, unexpected 'o', expecting 'n'
./glr-regression.at:2231: sed >&2 -e '/^profiling:.*:Merge mismatch for summaries/d' stderr
776. glr-regression.at:2231: ok

## ------------- ##
## Test results. ##
## ------------- ##

712 tests were successful.
64 tests were skipped.

make[4]: Leaving directory '/build/bison-3.8.2+dfsg'
make[3]: Leaving directory '/build/bison-3.8.2+dfsg'
make[2]: Leaving directory '/build/bison-3.8.2+dfsg'
make[1]: Leaving directory '/build/bison-3.8.2+dfsg'
create-stamp debian/debhelper-build-stamp
 fakeroot debian/rules binary
dh binary
   dh_testroot
   dh_prep
   dh_auto_install
	make -j3 install DESTDIR=/build/bison-3.8.2\+dfsg/debian/tmp AM_UPDATE_INFO_DIR=no
make[1]: Entering directory '/build/bison-3.8.2+dfsg'
make install-recursive
make[2]: Entering directory '/build/bison-3.8.2+dfsg'
Making install in po
make[3]: Entering directory '/build/bison-3.8.2+dfsg/po'
installing bg.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/bg/LC_MESSAGES/bison.mo
installing ca.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ca/LC_MESSAGES/bison.mo
installing da.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/da/LC_MESSAGES/bison.mo
installing de.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/de/LC_MESSAGES/bison.mo
installing el.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/el/LC_MESSAGES/bison.mo
installing eo.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/eo/LC_MESSAGES/bison.mo
installing es.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/es/LC_MESSAGES/bison.mo
installing et.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/et/LC_MESSAGES/bison.mo
installing fi.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/fi/LC_MESSAGES/bison.mo
installing fr.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/fr/LC_MESSAGES/bison.mo
installing ga.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ga/LC_MESSAGES/bison.mo
installing hr.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/hr/LC_MESSAGES/bison.mo
installing id.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/id/LC_MESSAGES/bison.mo
installing it.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/it/LC_MESSAGES/bison.mo
installing ja.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ja/LC_MESSAGES/bison.mo
installing ms.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ms/LC_MESSAGES/bison.mo
installing nb.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/nb/LC_MESSAGES/bison.mo
installing nl.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/nl/LC_MESSAGES/bison.mo
installing pl.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/pl/LC_MESSAGES/bison.mo
installing pt.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/pt/LC_MESSAGES/bison.mo
installing pt_BR.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/pt_BR/LC_MESSAGES/bison.mo
installing ro.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ro/LC_MESSAGES/bison.mo
installing ru.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ru/LC_MESSAGES/bison.mo
installing sr.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/sr/LC_MESSAGES/bison.mo
installing sv.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/sv/LC_MESSAGES/bison.mo
installing tr.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/tr/LC_MESSAGES/bison.mo
installing uk.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/uk/LC_MESSAGES/bison.mo
installing vi.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/vi/LC_MESSAGES/bison.mo
installing zh_CN.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/zh_CN/LC_MESSAGES/bison.mo
installing zh_TW.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/zh_TW/LC_MESSAGES/bison.mo
if test "bison" = "gettext-tools"; then \
  /bin/mkdir -p /build/bison-3.8.2+dfsg/debian/tmp/usr/share/gettext/po; \
  for file in Makefile.in.in remove-potcdate.sin quot.sed boldquot.sed en@quot.header en@boldquot.header insert-header.sin Rules-quot Makevars.template; do \
    /usr/bin/install -c -m 644 ./$file \
      /build/bison-3.8.2+dfsg/debian/tmp/usr/share/gettext/po/$file; \
  done; \
  for file in Makevars; do \
    rm -f /build/bison-3.8.2+dfsg/debian/tmp/usr/share/gettext/po/$file; \
  done; \
else \
  : ; \
fi
make[3]: Leaving directory '/build/bison-3.8.2+dfsg/po'
Making install in runtime-po
make[3]: Entering directory '/build/bison-3.8.2+dfsg/runtime-po'
installing ast.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ast/LC_MESSAGES/bison-runtime.mo
installing bg.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/bg/LC_MESSAGES/bison-runtime.mo
installing ca.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ca/LC_MESSAGES/bison-runtime.mo
installing da.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/da/LC_MESSAGES/bison-runtime.mo
installing de.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/de/LC_MESSAGES/bison-runtime.mo
installing el.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/el/LC_MESSAGES/bison-runtime.mo
installing eo.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/eo/LC_MESSAGES/bison-runtime.mo
installing es.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/es/LC_MESSAGES/bison-runtime.mo
installing et.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/et/LC_MESSAGES/bison-runtime.mo
installing fi.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/fi/LC_MESSAGES/bison-runtime.mo
installing fr.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/fr/LC_MESSAGES/bison-runtime.mo
installing ga.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ga/LC_MESSAGES/bison-runtime.mo
installing gl.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/gl/LC_MESSAGES/bison-runtime.mo
installing hr.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/hr/LC_MESSAGES/bison-runtime.mo
installing hu.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/hu/LC_MESSAGES/bison-runtime.mo
installing
ia.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ia/LC_MESSAGES/bison-runtime.mo installing id.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/id/LC_MESSAGES/bison-runtime.mo installing it.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/it/LC_MESSAGES/bison-runtime.mo installing ja.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ja/LC_MESSAGES/bison-runtime.mo installing ky.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ky/LC_MESSAGES/bison-runtime.mo installing lt.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/lt/LC_MESSAGES/bison-runtime.mo installing lv.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/lv/LC_MESSAGES/bison-runtime.mo installing ms.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ms/LC_MESSAGES/bison-runtime.mo installing nb.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/nb/LC_MESSAGES/bison-runtime.mo installing nl.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/nl/LC_MESSAGES/bison-runtime.mo installing pl.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/pl/LC_MESSAGES/bison-runtime.mo installing pt.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/pt/LC_MESSAGES/bison-runtime.mo installing pt_BR.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/pt_BR/LC_MESSAGES/bison-runtime.mo installing ro.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ro/LC_MESSAGES/bison-runtime.mo installing ru.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ru/LC_MESSAGES/bison-runtime.mo installing sl.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/sl/LC_MESSAGES/bison-runtime.mo installing sq.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/sq/LC_MESSAGES/bison-runtime.mo installing sr.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/sr/LC_MESSAGES/bison-runtime.mo installing sv.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/sv/LC_MESSAGES/bison-runtime.mo 
installing ta.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ta/LC_MESSAGES/bison-runtime.mo installing th.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/th/LC_MESSAGES/bison-runtime.mo installing tr.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/tr/LC_MESSAGES/bison-runtime.mo installing uk.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/uk/LC_MESSAGES/bison-runtime.mo installing vi.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/vi/LC_MESSAGES/bison-runtime.mo installing zh_CN.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/zh_CN/LC_MESSAGES/bison-runtime.mo installing zh_TW.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/zh_TW/LC_MESSAGES/bison-runtime.mo if test "bison" = "gettext-tools"; then \ /bin/mkdir -p /build/bison-3.8.2+dfsg/debian/tmp/usr/share/gettext/po; \ for file in Makefile.in.in remove-potcdate.sin quot.sed boldquot.sed en@quot.header en@boldquot.header insert-header.sin Rules-quot Makevars.template; do \ /usr/bin/install -c -m 644 ./$file \ /build/bison-3.8.2+dfsg/debian/tmp/usr/share/gettext/po/$file; \ done; \ for file in Makevars; do \ rm -f /build/bison-3.8.2+dfsg/debian/tmp/usr/share/gettext/po/$file; \ done; \ else \ : ; \ fi make[3]: Leaving directory '/build/bison-3.8.2+dfsg/runtime-po' Making install in gnulib-po make[3]: Entering directory '/build/bison-3.8.2+dfsg/gnulib-po' installing af.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/af/LC_MESSAGES/bison-gnulib.mo installing be.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/be/LC_MESSAGES/bison-gnulib.mo installing bg.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/bg/LC_MESSAGES/bison-gnulib.mo installing ca.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ca/LC_MESSAGES/bison-gnulib.mo installing cs.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/cs/LC_MESSAGES/bison-gnulib.mo installing da.gmo as 
/build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/da/LC_MESSAGES/bison-gnulib.mo installing de.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/de/LC_MESSAGES/bison-gnulib.mo installing el.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/el/LC_MESSAGES/bison-gnulib.mo installing eo.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/eo/LC_MESSAGES/bison-gnulib.mo installing es.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/es/LC_MESSAGES/bison-gnulib.mo installing et.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/et/LC_MESSAGES/bison-gnulib.mo installing eu.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/eu/LC_MESSAGES/bison-gnulib.mo installing fi.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/fi/LC_MESSAGES/bison-gnulib.mo installing fr.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/fr/LC_MESSAGES/bison-gnulib.mo installing ga.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ga/LC_MESSAGES/bison-gnulib.mo installing gl.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/gl/LC_MESSAGES/bison-gnulib.mo installing hu.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/hu/LC_MESSAGES/bison-gnulib.mo installing it.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/it/LC_MESSAGES/bison-gnulib.mo installing ja.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ja/LC_MESSAGES/bison-gnulib.mo installing ko.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ko/LC_MESSAGES/bison-gnulib.mo installing ms.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ms/LC_MESSAGES/bison-gnulib.mo installing nb.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/nb/LC_MESSAGES/bison-gnulib.mo installing nl.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/nl/LC_MESSAGES/bison-gnulib.mo installing pl.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/pl/LC_MESSAGES/bison-gnulib.mo installing pt.gmo as 
/build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/pt/LC_MESSAGES/bison-gnulib.mo installing pt_BR.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/pt_BR/LC_MESSAGES/bison-gnulib.mo installing ro.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ro/LC_MESSAGES/bison-gnulib.mo installing ru.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/ru/LC_MESSAGES/bison-gnulib.mo installing rw.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/rw/LC_MESSAGES/bison-gnulib.mo installing sk.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/sk/LC_MESSAGES/bison-gnulib.mo installing sl.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/sl/LC_MESSAGES/bison-gnulib.mo installing sr.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/sr/LC_MESSAGES/bison-gnulib.mo installing sv.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/sv/LC_MESSAGES/bison-gnulib.mo installing tr.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/tr/LC_MESSAGES/bison-gnulib.mo installing uk.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/uk/LC_MESSAGES/bison-gnulib.mo installing vi.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/vi/LC_MESSAGES/bison-gnulib.mo installing zh_CN.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/zh_CN/LC_MESSAGES/bison-gnulib.mo installing zh_TW.gmo as /build/bison-3.8.2+dfsg/debian/tmp/usr/share/locale/zh_TW/LC_MESSAGES/bison-gnulib.mo if test "bison" = "gettext-tools"; then \ /bin/mkdir -p /build/bison-3.8.2+dfsg/debian/tmp/usr/share/gettext/po; \ for file in Makefile.in.in remove-potcdate.sin quot.sed boldquot.sed en@quot.header en@boldquot.header insert-header.sin Rules-quot Makevars.template; do \ /usr/bin/install -c -m 644 ./$file \ /build/bison-3.8.2+dfsg/debian/tmp/usr/share/gettext/po/$file; \ done; \ for file in Makevars; do \ rm -f /build/bison-3.8.2+dfsg/debian/tmp/usr/share/gettext/po/$file; \ done; \ else \ : ; \ fi make[3]: Leaving directory 
'/build/bison-3.8.2+dfsg/gnulib-po' Making install in . make[3]: Entering directory '/build/bison-3.8.2+dfsg' /bin/mkdir -p doc LC_ALL=C tests/bison --version >doc/bison.help.tmp LC_ALL=C tests/bison --help | \ sed -e 's,^Usage: .*/bison \[OPTION\],Usage: bison [OPTION],g' \ -e '/translation bugs/d' >>doc/bison.help.tmp ./build-aux/move-if-change doc/bison.help.tmp doc/bison.help make[4]: Entering directory '/build/bison-3.8.2+dfsg' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/lib/arm-linux-gnueabihf' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/bin' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/bin' /usr/bin/install -c -m 644 lib/liby.a '/build/bison-3.8.2+dfsg/debian/tmp/usr/lib/arm-linux-gnueabihf' /usr/bin/install -c src/bison '/build/bison-3.8.2+dfsg/debian/tmp/usr/bin' /usr/bin/install -c src/yacc '/build/bison-3.8.2+dfsg/debian/tmp/usr/bin' ( cd '/build/bison-3.8.2+dfsg/debian/tmp/usr/lib/arm-linux-gnueabihf' && ranlib liby.a ) /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/aclocal' /usr/bin/install -c -m 644 m4/bison-i18n.m4 '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/aclocal' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c++/calc++' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c++' /usr/bin/install -c -m 644 examples/c++/calc++/driver.cc examples/c++/calc++/driver.hh examples/c++/calc++/scanner.ll examples/c++/calc++/calc++.cc examples/c++/calc++/parser.yy '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c++/calc++' /usr/bin/install -c -m 644 examples/c++/simple.yy '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c++' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/bistromathic' /usr/bin/install -c -m 644 examples/c/bistromathic/parse.y examples/c/bistromathic/Makefile examples/c/bistromathic/README.md '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/bistromathic' 
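The `./build-aux/move-if-change doc/bison.help.tmp doc/bison.help` step above is a standard GNU build idiom for avoiding needless rebuilds. A minimal sketch of its behavior (file names here are illustrative stand-ins, not the real build-aux script): replace the target only when the newly generated file actually differs, so an unchanged target keeps its old mtime and does not trigger dependent rules.

```shell
# Sketch of move-if-change semantics with hypothetical files.
printf 'help text\n' > bison.help        # existing target
printf 'help text\n' > bison.help.tmp    # freshly regenerated copy

if cmp -s bison.help.tmp bison.help; then
  rm -f bison.help.tmp                   # identical: keep old file and mtime
else
  mv -f bison.help.tmp bison.help        # different: replace the target
fi

test ! -e bison.help.tmp && echo unchanged
```

Because the regenerated copy is identical here, the temporary file is discarded and the script prints `unchanged`.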
/bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c' /usr/bin/install -c -m 644 examples/c/README.md '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/calc' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/d/calc' /usr/bin/install -c -m 644 examples/c/calc/calc.y examples/c/calc/Makefile examples/c/calc/README.md '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/calc' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c++/calc++' /usr/bin/install -c -m 644 examples/d/calc/calc.y examples/d/calc/Makefile '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/d/calc' /usr/bin/install -c -m 644 examples/c++/calc++/README.md examples/c++/calc++/Makefile '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c++/calc++' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c++' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/d' /usr/bin/install -c -m 644 examples/c++/README.md examples/c++/Makefile examples/c++/variant.yy examples/c++/variant-11.yy '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c++' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison' /usr/bin/install -c -m 644 examples/d/README.md '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/d' /usr/bin/install -c -m 644 AUTHORS COPYING NEWS README THANKS TODO '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/glr' /usr/bin/install -c -m 644 examples/README.md '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples' /usr/bin/install -c -m 644 examples/c/glr/c++-types.y examples/c/glr/Makefile examples/c/glr/README.md 
'/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/glr' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/java' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/java/calc' /usr/bin/install -c -m 644 examples/java/README.md '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/java' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/java/simple' /usr/bin/install -c -m 644 examples/java/calc/Calc.y examples/java/calc/Makefile '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/java/calc' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/lexcalc' /usr/bin/install -c -m 644 examples/java/simple/Calc.y examples/java/simple/Makefile '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/java/simple' /usr/bin/install -c -m 644 examples/c/lexcalc/parse.y examples/c/lexcalc/scan.l examples/c/lexcalc/Makefile examples/c/lexcalc/README.md '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/lexcalc' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/bison/m4sugar' /usr/bin/install -c -m 644 data/m4sugar/foreach.m4 data/m4sugar/m4sugar.m4 '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/bison/m4sugar' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/mfcalc' /usr/bin/install -c -m 644 examples/c/mfcalc/Makefile '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/mfcalc' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/bison' /usr/bin/install -c -m 644 data/README.md data/bison-default.css '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/bison' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/pushcalc' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/reccalc' /usr/bin/install -c -m 644 examples/c/pushcalc/calc.y examples/c/pushcalc/Makefile examples/c/pushcalc/README.md 
'/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/pushcalc' /usr/bin/install -c -m 644 examples/c/reccalc/parse.y examples/c/reccalc/scan.l examples/c/reccalc/Makefile examples/c/reccalc/README.md '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/reccalc' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/rpcalc' /usr/bin/install -c -m 644 examples/c/rpcalc/Makefile '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/rpcalc' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/d/simple' /usr/bin/install -c -m 644 examples/d/simple/calc.y examples/d/simple/Makefile '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/d/simple' /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/bison/skeletons' /bin/mkdir -p doc /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/bison/xslt' /usr/bin/install -c -m 644 data/xslt/bison.xsl data/xslt/xml2dot.xsl data/xslt/xml2text.xsl data/xslt/xml2xhtml.xsl '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/bison/xslt' LC_ALL=C tests/bison --version >doc/bison.help.tmp /usr/bin/install -c -m 644 data/skeletons/bison.m4 data/skeletons/c++-skel.m4 data/skeletons/c++.m4 data/skeletons/c-like.m4 data/skeletons/c-skel.m4 data/skeletons/c.m4 data/skeletons/glr.c data/skeletons/glr.cc data/skeletons/glr2.cc data/skeletons/java-skel.m4 data/skeletons/java.m4 data/skeletons/lalr1.cc data/skeletons/lalr1.java data/skeletons/location.cc data/skeletons/stack.hh data/skeletons/traceon.m4 data/skeletons/variant.hh data/skeletons/yacc.c data/skeletons/d-skel.m4 data/skeletons/d.m4 data/skeletons/lalr1.d '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/bison/skeletons' LC_ALL=C tests/bison --help | \ sed -e 's,^Usage: .*/bison \[OPTION\],Usage: bison [OPTION],g' \ -e '/translation bugs/d' >>doc/bison.help.tmp /bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/mfcalc' /usr/bin/install -c -m 644 
examples/c/mfcalc/calc.h examples/c/mfcalc/mfcalc.y '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/mfcalc'
./build-aux/move-if-change doc/bison.help.tmp doc/bison.help
/bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/rpcalc'
/usr/bin/install -c -m 644 examples/c/rpcalc/rpcalc.y '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/doc/bison/examples/c/rpcalc'
/bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/man/man1'
/bin/mkdir -p '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/info'
/usr/bin/install -c -m 644 ./doc/bison.1 doc/yacc.1 '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/man/man1'
/usr/bin/install -c -m 644 ./doc/bison.info '/build/bison-3.8.2+dfsg/debian/tmp/usr/share/info'
make[4]: Leaving directory '/build/bison-3.8.2+dfsg'
make[3]: Leaving directory '/build/bison-3.8.2+dfsg'
make[2]: Leaving directory '/build/bison-3.8.2+dfsg'
make[1]: Leaving directory '/build/bison-3.8.2+dfsg'
   debian/rules execute_after_dh_auto_install
make[1]: Entering directory '/build/bison-3.8.2+dfsg'
rm -f -R /build/bison-3.8.2+dfsg/debian/tmp/usr/share/info/
mv /build/bison-3.8.2+dfsg/debian/tmp/usr/bin/yacc /build/bison-3.8.2+dfsg/debian/tmp/usr/bin/bison.yacc
mv /build/bison-3.8.2+dfsg/debian/tmp/usr/share/man/man1/yacc.1 \
   /build/bison-3.8.2+dfsg/debian/tmp/usr/share/man/man1/bison.yacc.1
make[1]: Leaving directory '/build/bison-3.8.2+dfsg'
   dh_install
   dh_installdocs
   dh_installchangelogs
   dh_installexamples
   dh_installman
   dh_lintian
   dh_perl
   dh_link
   dh_strip_nondeterminism
Garbage at end of string in strptime: +02:00 at /usr/lib/arm-linux-gnueabihf/perl/5.36/Time/Piece.pm line 598.
Perhaps a format flag did not match the actual input? at /usr/lib/arm-linux-gnueabihf/perl/5.36/Time/Piece.pm line 598.
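The dh_strip_nondeterminism step above (and the "Normalized ... .mo" lines that follow) rewrites embedded timestamps so that two independent builds produce bit-identical files. A minimal sketch of the same idea, applied to file mtimes rather than .mo internals, under the assumption of a hypothetical staging directory `pkgdir`: clamp every timestamp newer than SOURCE_DATE_EPOCH back to that epoch.

```shell
# Sketch: normalize file timestamps to SOURCE_DATE_EPOCH for reproducibility.
# 1633217158 is the epoch form of the changelog date used by this upload
# (Sat, 02 Oct 2021 16:25:58 -0700); 'pkgdir' and 'demo.mo' are hypothetical.
SOURCE_DATE_EPOCH=1633217158
mkdir -p pkgdir
echo demo > pkgdir/demo.mo            # freshly written, so mtime is "now"

# Reset anything newer than the epoch back to the epoch.
find pkgdir -newermt "@${SOURCE_DATE_EPOCH}" -print0 \
  | xargs -0r touch --no-dereference --date="@${SOURCE_DATE_EPOCH}"

stat -c '%Y %n' pkgdir/demo.mo
```

This is only the timestamp half of the story; the real tool also parses and rewrites format-specific fields inside archives and message catalogs.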
Normalized debian/bison/usr/share/locale/de/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/gl/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/gl/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/nl/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/nl/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/nl/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/rw/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/eo/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/eo/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/eo/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/ro/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/ro/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/ro/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/hu/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/hu/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/pt_BR/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/pt_BR/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/pt_BR/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/sk/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/tr/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/tr/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/tr/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/fi/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/fi/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/fi/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/et/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/et/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/et/LC_MESSAGES/bison-runtime.mo Normalized 
debian/bison/usr/share/locale/el/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/el/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/el/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/af/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/ga/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/ga/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/ga/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/sr/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/sr/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/ru/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/ru/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/ru/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/pl/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/pl/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/pl/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/id/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/id/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/cs/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/ko/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/sq/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/ky/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/ja/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/ja/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/ja/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/pt/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/pt/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/pt/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/vi/LC_MESSAGES/bison-gnulib.mo Normalized 
debian/bison/usr/share/locale/vi/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/vi/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/be/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/sv/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/sv/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/sv/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/it/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/it/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/it/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/hr/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/hr/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/es/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/es/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/es/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/de/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/de/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/ia/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/zh_TW/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/zh_TW/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/zh_TW/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/eu/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/fr/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/fr/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/fr/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/sl/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/sl/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/lt/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/nb/LC_MESSAGES/bison-gnulib.mo Normalized 
debian/bison/usr/share/locale/nb/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/nb/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/bg/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/bg/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/bg/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/ta/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/ca/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/ca/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/ca/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/uk/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/uk/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/uk/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/ast/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/ms/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/ms/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/ms/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/zh_CN/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/zh_CN/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/zh_CN/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/th/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/lv/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/da/LC_MESSAGES/bison-gnulib.mo Normalized debian/bison/usr/share/locale/da/LC_MESSAGES/bison.mo Normalized debian/bison/usr/share/locale/da/LC_MESSAGES/bison-runtime.mo Normalized debian/bison/usr/share/locale/sr/LC_MESSAGES/bison-gnulib.mo dh_compress dh_fixperms dh_missing dh_dwz dh_strip dh_makeshlibs dh_shlibdeps dh_installdeb dh_gencontrol dh_md5sums dh_builddeb dpkg-deb: building package 'libbison-dev' in '../libbison-dev_3.8.2+dfsg-1_armhf.deb'. 
dpkg-deb: building package 'bison-dbgsym' in '../bison-dbgsym_3.8.2+dfsg-1_armhf.deb'.
dpkg-deb: building package 'bison' in '../bison_3.8.2+dfsg-1_armhf.deb'.
 dpkg-genbuildinfo --build=binary -O../bison_3.8.2+dfsg-1_armhf.buildinfo
 dpkg-genchanges --build=binary -O../bison_3.8.2+dfsg-1_armhf.changes
dpkg-genchanges: info: binary-only upload (no source code included)
 dpkg-source --after-build .
dpkg-buildpackage: info: binary-only upload (no source included)
dpkg-genchanges: info: including full source code in upload
I: copying local configuration
I: unmounting dev/ptmx filesystem
I: unmounting dev/pts filesystem
I: unmounting dev/shm filesystem
I: unmounting proc filesystem
I: unmounting sys filesystem
I: cleaning the build env
I: removing directory /srv/workspace/pbuilder/23384 and its subdirectories
I: Current time: Sat May 27 05:13:49 -12 2023
I: pbuilder-time-stamp: 1685207629
Sat May 27 17:16:10 UTC 2023 I: 1st build successful. Starting 2nd build on remote node virt64b-armhf-rb.debian.net.
Sat May 27 17:16:10 UTC 2023 I: Preparing to do remote build '2' on virt64b-armhf-rb.debian.net.
Sat May 27 18:21:21 UTC 2023 I: Deleting $TMPDIR on virt64b-armhf-rb.debian.net.
Sat May 27 18:21:24 UTC 2023 I: bison_3.8.2+dfsg-1_armhf.changes:
Format: 1.8
Date: Sat, 02 Oct 2021 16:25:58 -0700
Source: bison
Binary: bison bison-dbgsym libbison-dev
Architecture: armhf
Version: 2:3.8.2+dfsg-1
Distribution: unstable
Urgency: medium
Maintainer: Chuan-kai Lin
Changed-By: Chuan-kai Lin
Description:
 bison      - YACC-compatible parser generator
 libbison-dev - YACC-compatible parser generator - development library
Changes:
 bison (2:3.8.2+dfsg-1) unstable; urgency=medium
 .
   * New upstream version 3.8.2+dfsg
Checksums-Sha1:
 ba5e34229b39181df91335f40d08cb926d8502fe 746756 bison-dbgsym_3.8.2+dfsg-1_armhf.deb
 6b3989667b6a0767aeb92d021a7a31e0606a010f 5342 bison_3.8.2+dfsg-1_armhf.buildinfo
 6428fc76dbc01b53bde547c86a60ee0e25dd2d0b 1135744 bison_3.8.2+dfsg-1_armhf.deb
 2617d35c35c6064c0ddf92b861b65033f8c8d4df 671792 libbison-dev_3.8.2+dfsg-1_armhf.deb
Checksums-Sha256:
 8d9382080558f92d680f4c737b246ee7ff311375eb900581d6d0b3f27132ca4c 746756 bison-dbgsym_3.8.2+dfsg-1_armhf.deb
 d90ae2ca13b5cf7cb1d38d688ea409184f38d485a7860120e6c61fee8c39e5c6 5342 bison_3.8.2+dfsg-1_armhf.buildinfo
 1fd6366eb58ec09b30d4e0dfaebf940a1bd3d4a854ed56b2956277ec69f867c3 1135744 bison_3.8.2+dfsg-1_armhf.deb
 7fd40de282b9327267b22b5c59e8255521fb360e734355397b03da9aef1e96fe 671792 libbison-dev_3.8.2+dfsg-1_armhf.deb
Files:
 b7839a356523916bfb8d24a712c363a4 746756 debug optional bison-dbgsym_3.8.2+dfsg-1_armhf.deb
 d9c16330c9b9c707212ee7f05ac7d66f 5342 devel optional bison_3.8.2+dfsg-1_armhf.buildinfo
 8b9b77c47f1d7f6418e6143eda163e6e 1135744 devel optional bison_3.8.2+dfsg-1_armhf.deb
 c7d78a4447f04acdab93c7fbb6234521 671792 libdevel optional libbison-dev_3.8.2+dfsg-1_armhf.deb
Sat May 27 18:21:32 UTC 2023 I: diffoscope 242 will be used to compare the two builds:
# Profiling output for: /usr/bin/diffoscope --timeout 7200 --html /srv/reproducible-results/rbuild-debian/r-b-build.7ctXGpJs/bison_3.8.2+dfsg-1.diffoscope.html --text /srv/reproducible-results/rbuild-debian/r-b-build.7ctXGpJs/bison_3.8.2+dfsg-1.diffoscope.txt --json /srv/reproducible-results/rbuild-debian/r-b-build.7ctXGpJs/bison_3.8.2+dfsg-1.diffoscope.json --profile=- /srv/reproducible-results/rbuild-debian/r-b-build.7ctXGpJs/b1/bison_3.8.2+dfsg-1_armhf.changes /srv/reproducible-results/rbuild-debian/r-b-build.7ctXGpJs/b2/bison_3.8.2+dfsg-1_armhf.changes
## command (total time: 0.000s)
  0.000s 1 call	cmp (internal)
## has_same_content_as (total time: 0.000s)
  0.000s 1 call	abc.DotChangesFile
## main (total time: 0.626s)
  0.626s 2 calls	outputs
  0.000s 1 call	cleanup
## recognizes (total time: 0.046s)
  0.046s 12 calls	diffoscope.comparators.binary.FilesystemFile
  0.000s 10 calls	abc.DotChangesFile
## specialize (total time: 0.000s)
  0.000s 1 call	specialize
Sat May 27 18:21:34 UTC 2023 I: diffoscope 242 found no differences in the changes files, and a .buildinfo file also exists.
Sat May 27 18:21:34 UTC 2023 I: bison from bookworm built successfully and reproducibly on armhf.
Sat May 27 18:21:35 UTC 2023 I: Submitting .buildinfo files to external archives:
Sat May 27 18:21:35 UTC 2023 I: Submitting 8.0K b1/bison_3.8.2+dfsg-1_armhf.buildinfo.asc
Sat May 27 18:21:37 UTC 2023 I: Submitting 8.0K b2/bison_3.8.2+dfsg-1_armhf.buildinfo.asc
Sat May 27 18:21:37 UTC 2023 I: Done submitting .buildinfo files to http://buildinfo.debian.net/api/submit.
Sat May 27 18:21:37 UTC 2023 I: Done submitting .buildinfo files.
Sat May 27 18:21:37 UTC 2023 I: Removing signed bison_3.8.2+dfsg-1_armhf.buildinfo.asc files:
removed './b1/bison_3.8.2+dfsg-1_armhf.buildinfo.asc'
removed './b2/bison_3.8.2+dfsg-1_armhf.buildinfo.asc'
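The diffoscope run above reduces, at its core, to a byte-for-byte comparison of the artifacts from the two independent builds; diffoscope only does the expensive recursive analysis when the bytes differ. A minimal sketch of that core check, using two hypothetical local files in place of the real .changes/.deb pair (`build1.deb` and `build2.deb` are illustrative names, not files from this log):

```shell
# Sketch: the essence of a reproducibility check is bitwise artifact equality.
printf 'same content\n' > build1.deb
printf 'same content\n' > build2.deb

if cmp -s build1.deb build2.deb; then
  echo reproducible
else
  echo differs
fi

# The published checksums play the same role out-of-band: equal hashes
# from two builders imply equal artifacts.
sha256sum build1.deb build2.deb
```

When the contents match, as they did for this bison build, the check prints `reproducible` and both sha256 lines show the same hash.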